00:00:00.001 Started by upstream project "autotest-per-patch" build number 132112 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.248 > git --version # 'git version 2.39.2' 00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.339 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.351 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.362 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.362 > git config core.sparsecheckout # timeout=10 00:00:05.372 > git read-tree -mu HEAD # timeout=10 00:00:05.388 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.409 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.410 > git 
rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.522 [Pipeline] Start of Pipeline 00:00:05.532 [Pipeline] library 00:00:05.533 Loading library shm_lib@master 00:00:05.533 Library shm_lib@master is cached. Copying from home. 00:00:05.546 [Pipeline] node 00:00:05.555 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.557 [Pipeline] { 00:00:05.566 [Pipeline] catchError 00:00:05.567 [Pipeline] { 00:00:05.579 [Pipeline] wrap 00:00:05.588 [Pipeline] { 00:00:05.596 [Pipeline] stage 00:00:05.598 [Pipeline] { (Prologue) 00:00:05.816 [Pipeline] sh 00:00:06.107 + logger -p user.info -t JENKINS-CI 00:00:06.126 [Pipeline] echo 00:00:06.128 Node: CYP9 00:00:06.136 [Pipeline] sh 00:00:06.449 [Pipeline] setCustomBuildProperty 00:00:06.457 [Pipeline] echo 00:00:06.458 Cleanup processes 00:00:06.461 [Pipeline] sh 00:00:06.748 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.748 310838 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.761 [Pipeline] sh 00:00:07.048 ++ grep -v 'sudo pgrep' 00:00:07.048 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.048 ++ awk '{print $1}' 00:00:07.048 + sudo kill -9 00:00:07.048 + true 00:00:07.060 [Pipeline] cleanWs 00:00:07.067 [WS-CLEANUP] Deleting project workspace... 00:00:07.067 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.073 [WS-CLEANUP] done 00:00:07.076 [Pipeline] setCustomBuildProperty 00:00:07.084 [Pipeline] sh 00:00:07.366 + sudo git config --global --replace-all safe.directory '*' 00:00:07.457 [Pipeline] httpRequest 00:00:08.014 [Pipeline] echo 00:00:08.016 Sorcerer 10.211.164.101 is alive 00:00:08.025 [Pipeline] retry 00:00:08.027 [Pipeline] { 00:00:08.041 [Pipeline] httpRequest 00:00:08.045 HttpMethod: GET 00:00:08.045 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.046 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.048 Response Code: HTTP/1.1 200 OK 00:00:08.049 Success: Status code 200 is in the accepted range: 200,404 00:00:08.049 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.101 [Pipeline] } 00:00:09.114 [Pipeline] // retry 00:00:09.120 [Pipeline] sh 00:00:09.404 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.421 [Pipeline] httpRequest 00:00:09.758 [Pipeline] echo 00:00:09.760 Sorcerer 10.211.164.101 is alive 00:00:09.769 [Pipeline] retry 00:00:09.771 [Pipeline] { 00:00:09.784 [Pipeline] httpRequest 00:00:09.788 HttpMethod: GET 00:00:09.789 URL: http://10.211.164.101/packages/spdk_cfcfe6c3e5f17d9eac3202c1cc92d7e39c091cc1.tar.gz 00:00:09.790 Sending request to url: http://10.211.164.101/packages/spdk_cfcfe6c3e5f17d9eac3202c1cc92d7e39c091cc1.tar.gz 00:00:09.808 Response Code: HTTP/1.1 200 OK 00:00:09.808 Success: Status code 200 is in the accepted range: 200,404 00:00:09.808 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cfcfe6c3e5f17d9eac3202c1cc92d7e39c091cc1.tar.gz 00:01:06.235 [Pipeline] } 00:01:06.253 [Pipeline] // retry 00:01:06.261 [Pipeline] sh 00:01:06.552 + tar --no-same-owner -xf spdk_cfcfe6c3e5f17d9eac3202c1cc92d7e39c091cc1.tar.gz 00:01:09.868 [Pipeline] sh 00:01:10.158 + git -C spdk log 
--oneline -n5 00:01:10.158 cfcfe6c3e bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:10.158 4aa7d50c3 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:10.158 b1e5d8902 dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:01:10.158 00ed84136 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:01:10.158 141b95f6d dif: Rename internal generate/verify_copy() by insert/strip_copy() 00:01:10.171 [Pipeline] } 00:01:10.186 [Pipeline] // stage 00:01:10.195 [Pipeline] stage 00:01:10.198 [Pipeline] { (Prepare) 00:01:10.215 [Pipeline] writeFile 00:01:10.231 [Pipeline] sh 00:01:10.520 + logger -p user.info -t JENKINS-CI 00:01:10.535 [Pipeline] sh 00:01:10.823 + logger -p user.info -t JENKINS-CI 00:01:10.837 [Pipeline] sh 00:01:11.125 + cat autorun-spdk.conf 00:01:11.125 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.125 SPDK_TEST_NVMF=1 00:01:11.125 SPDK_TEST_NVME_CLI=1 00:01:11.125 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.125 SPDK_TEST_NVMF_NICS=e810 00:01:11.125 SPDK_TEST_VFIOUSER=1 00:01:11.125 SPDK_RUN_UBSAN=1 00:01:11.125 NET_TYPE=phy 00:01:11.134 RUN_NIGHTLY=0 00:01:11.139 [Pipeline] readFile 00:01:11.166 [Pipeline] withEnv 00:01:11.169 [Pipeline] { 00:01:11.182 [Pipeline] sh 00:01:11.472 + set -ex 00:01:11.472 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:11.472 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.472 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.472 ++ SPDK_TEST_NVMF=1 00:01:11.472 ++ SPDK_TEST_NVME_CLI=1 00:01:11.472 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.472 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.472 ++ SPDK_TEST_VFIOUSER=1 00:01:11.472 ++ SPDK_RUN_UBSAN=1 00:01:11.472 ++ NET_TYPE=phy 00:01:11.472 ++ RUN_NIGHTLY=0 00:01:11.472 + case $SPDK_TEST_NVMF_NICS in 00:01:11.472 + DRIVERS=ice 00:01:11.472 + [[ tcp == \r\d\m\a ]] 00:01:11.472 + [[ -n ice ]] 00:01:11.472 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:01:11.472 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:11.472 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:11.472 rmmod: ERROR: Module irdma is not currently loaded 00:01:11.472 rmmod: ERROR: Module i40iw is not currently loaded 00:01:11.472 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:11.472 + true 00:01:11.472 + for D in $DRIVERS 00:01:11.472 + sudo modprobe ice 00:01:11.472 + exit 0 00:01:11.482 [Pipeline] } 00:01:11.500 [Pipeline] // withEnv 00:01:11.504 [Pipeline] } 00:01:11.519 [Pipeline] // stage 00:01:11.530 [Pipeline] catchError 00:01:11.531 [Pipeline] { 00:01:11.546 [Pipeline] timeout 00:01:11.546 Timeout set to expire in 1 hr 0 min 00:01:11.548 [Pipeline] { 00:01:11.563 [Pipeline] stage 00:01:11.565 [Pipeline] { (Tests) 00:01:11.579 [Pipeline] sh 00:01:11.869 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.869 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.869 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.869 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:11.869 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.869 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.869 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:11.869 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.869 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.869 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.869 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:11.869 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.869 + source /etc/os-release 00:01:11.869 ++ NAME='Fedora Linux' 00:01:11.869 ++ VERSION='39 (Cloud Edition)' 00:01:11.869 ++ ID=fedora 00:01:11.869 ++ VERSION_ID=39 00:01:11.869 ++ VERSION_CODENAME= 00:01:11.869 ++ PLATFORM_ID=platform:f39 00:01:11.869 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:11.869 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:11.869 ++ LOGO=fedora-logo-icon 00:01:11.869 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:11.869 ++ HOME_URL=https://fedoraproject.org/ 00:01:11.869 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:11.869 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:11.869 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:11.869 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:11.869 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:11.869 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:11.869 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:11.869 ++ SUPPORT_END=2024-11-12 00:01:11.869 ++ VARIANT='Cloud Edition' 00:01:11.869 ++ VARIANT_ID=cloud 00:01:11.869 + uname -a 00:01:11.869 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:11.869 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.172 Hugepages 00:01:15.172 node hugesize free / total 00:01:15.172 node0 1048576kB 0 / 0 00:01:15.172 node0 2048kB 0 / 0 00:01:15.172 node1 1048576kB 0 / 0 00:01:15.172 node1 2048kB 0 / 0 00:01:15.172 00:01:15.172 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.172 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:01:15.172 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:15.172 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:15.172 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:15.172 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:15.172 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:15.172 + rm -f /tmp/spdk-ld-path 00:01:15.172 + source autorun-spdk.conf 00:01:15.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.172 ++ SPDK_TEST_NVMF=1 00:01:15.172 ++ SPDK_TEST_NVME_CLI=1 00:01:15.172 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.172 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.172 ++ SPDK_TEST_VFIOUSER=1 00:01:15.172 ++ SPDK_RUN_UBSAN=1 00:01:15.172 ++ NET_TYPE=phy 00:01:15.172 ++ RUN_NIGHTLY=0 00:01:15.172 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.172 + [[ -n '' ]] 00:01:15.172 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.172 + for M in /var/spdk/build-*-manifest.txt 00:01:15.172 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:15.172 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.172 + for M in /var/spdk/build-*-manifest.txt 00:01:15.172 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.172 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.172 + for M in /var/spdk/build-*-manifest.txt 00:01:15.172 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:15.172 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.172 ++ uname 00:01:15.172 + [[ Linux == \L\i\n\u\x ]] 00:01:15.172 + sudo dmesg -T 00:01:15.172 + sudo dmesg --clear 00:01:15.172 + dmesg_pid=311815 00:01:15.172 + [[ Fedora Linux == FreeBSD ]] 00:01:15.172 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.172 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.172 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.172 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.172 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.172 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.172 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.172 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.172 + sudo dmesg -Tw 00:01:15.172 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.172 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:15.172 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.172 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.172 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.172 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.172 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.172 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.172 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.172 13:25:38 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:15.172 13:25:38 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:15.172 13:25:38 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:15.172 13:25:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:15.172 13:25:38 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.172 13:25:38 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:15.172 13:25:38 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.172 13:25:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:15.172 13:25:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.172 13:25:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.172 13:25:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.172 13:25:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.172 13:25:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.172 13:25:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.172 13:25:38 -- paths/export.sh@5 -- $ export PATH 00:01:15.172 13:25:38 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.172 13:25:38 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.172 13:25:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:15.172 13:25:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730895938.XXXXXX 00:01:15.172 13:25:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730895938.c5G2mz 00:01:15.172 13:25:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:15.172 13:25:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:15.172 13:25:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:15.173 13:25:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.173 13:25:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.173 13:25:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:15.173 13:25:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:15.173 13:25:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.173 13:25:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.173 13:25:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:15.173 13:25:38 -- pm/common@17 -- $ local monitor 00:01:15.173 13:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.173 13:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.173 13:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.173 13:25:38 -- pm/common@21 -- $ date +%s 00:01:15.173 13:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.173 13:25:38 -- pm/common@21 -- $ date +%s 00:01:15.173 13:25:38 -- pm/common@25 -- $ sleep 1 00:01:15.173 13:25:38 -- pm/common@21 -- $ date +%s 00:01:15.173 13:25:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730895938 00:01:15.173 13:25:38 -- pm/common@21 -- $ date +%s 00:01:15.173 13:25:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730895938 00:01:15.173 13:25:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730895938 00:01:15.173 13:25:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730895938 00:01:15.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730895938_collect-cpu-load.pm.log 00:01:15.173 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730895938_collect-vmstat.pm.log 00:01:15.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730895938_collect-cpu-temp.pm.log 00:01:15.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730895938_collect-bmc-pm.bmc.pm.log 00:01:16.116 13:25:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:16.116 13:25:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.116 13:25:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.116 13:25:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.116 13:25:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.116 Wed Nov 6 12:25:39 PM UTC 2024 00:01:16.116 13:25:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.116 v25.01-pre-175-gcfcfe6c3e 00:01:16.116 13:25:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.116 13:25:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.116 13:25:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.116 13:25:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:16.116 13:25:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:16.116 13:25:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.116 ************************************ 00:01:16.116 START TEST ubsan 00:01:16.116 ************************************ 00:01:16.116 13:25:39 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:16.116 using ubsan 00:01:16.116 00:01:16.116 real 0m0.001s 00:01:16.116 user 0m0.000s 00:01:16.116 sys 0m0.001s 00:01:16.116 13:25:39 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:16.116 13:25:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:16.116 ************************************ 00:01:16.116 END TEST ubsan 00:01:16.116 
************************************ 00:01:16.381 13:25:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.381 13:25:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.381 13:25:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:16.381 13:25:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:16.381 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.381 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.954 Using 'verbs' RDMA provider 00:01:32.654 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:44.879 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:44.879 Creating mk/config.mk...done. 00:01:44.879 Creating mk/cc.flags.mk...done. 00:01:44.879 Type 'make' to build. 
00:01:44.879 13:26:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:44.879 13:26:08 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:44.879 13:26:08 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:44.879 13:26:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.879 ************************************ 00:01:44.879 START TEST make 00:01:44.879 ************************************ 00:01:44.879 13:26:08 make -- common/autotest_common.sh@1127 -- $ make -j144 00:01:45.451 make[1]: Nothing to be done for 'all'. 00:01:46.837 The Meson build system 00:01:46.837 Version: 1.5.0 00:01:46.837 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:46.837 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.837 Build type: native build 00:01:46.837 Project name: libvfio-user 00:01:46.837 Project version: 0.0.1 00:01:46.837 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:46.837 C linker for the host machine: cc ld.bfd 2.40-14 00:01:46.837 Host machine cpu family: x86_64 00:01:46.837 Host machine cpu: x86_64 00:01:46.837 Run-time dependency threads found: YES 00:01:46.837 Library dl found: YES 00:01:46.837 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:46.837 Run-time dependency json-c found: YES 0.17 00:01:46.837 Run-time dependency cmocka found: YES 1.1.7 00:01:46.837 Program pytest-3 found: NO 00:01:46.837 Program flake8 found: NO 00:01:46.837 Program misspell-fixer found: NO 00:01:46.837 Program restructuredtext-lint found: NO 00:01:46.837 Program valgrind found: YES (/usr/bin/valgrind) 00:01:46.837 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.837 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.837 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.837 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:46.837 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:46.837 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:46.837 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:46.837 Build targets in project: 8 00:01:46.837 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:46.837 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:46.837 00:01:46.837 libvfio-user 0.0.1 00:01:46.837 00:01:46.837 User defined options 00:01:46.837 buildtype : debug 00:01:46.837 default_library: shared 00:01:46.837 libdir : /usr/local/lib 00:01:46.837 00:01:46.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.096 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.096 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:47.096 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:47.096 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:47.096 [4/37] Compiling C object samples/null.p/null.c.o 00:01:47.096 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:47.096 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:47.096 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:47.096 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:47.096 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:47.096 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:47.096 [11/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:47.096 [12/37] 
Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:47.096 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:47.096 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:47.096 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:47.096 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:47.096 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:47.096 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:47.096 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:47.096 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:47.096 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:47.096 [22/37] Compiling C object samples/server.p/server.c.o 00:01:47.096 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:47.096 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:47.096 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:47.096 [26/37] Compiling C object samples/client.p/client.c.o 00:01:47.096 [27/37] Linking target samples/client 00:01:47.096 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:47.096 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:47.358 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:47.358 [31/37] Linking target test/unit_tests 00:01:47.358 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:47.358 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:47.358 [34/37] Linking target samples/server 00:01:47.358 [35/37] Linking target samples/null 00:01:47.358 [36/37] Linking target samples/gpio-pci-idio-16 00:01:47.358 [37/37] Linking target samples/lspci 00:01:47.358 INFO: autodetecting backend as ninja 00:01:47.358 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.619 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.880 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.880 ninja: no work to do. 00:01:54.474 The Meson build system 00:01:54.474 Version: 1.5.0 00:01:54.474 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:54.474 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:54.474 Build type: native build 00:01:54.474 Program cat found: YES (/usr/bin/cat) 00:01:54.474 Project name: DPDK 00:01:54.474 Project version: 24.03.0 00:01:54.474 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.474 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.474 Host machine cpu family: x86_64 00:01:54.474 Host machine cpu: x86_64 00:01:54.474 Message: ## Building in Developer Mode ## 00:01:54.474 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.474 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.474 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.474 Program python3 found: YES (/usr/bin/python3) 00:01:54.474 Program cat found: YES (/usr/bin/cat) 00:01:54.474 Compiler for C supports arguments -march=native: YES 00:01:54.474 Checking for size of "void *" : 8 00:01:54.474 Checking for size of "void *" : 8 (cached) 00:01:54.474 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.474 Library m found: YES 00:01:54.474 Library numa found: YES 00:01:54.474 Has header "numaif.h" : YES 00:01:54.474 Library fdt found: NO 
00:01:54.474 Library execinfo found: NO 00:01:54.474 Has header "execinfo.h" : YES 00:01:54.474 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.474 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.474 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.474 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.474 Run-time dependency openssl found: YES 3.1.1 00:01:54.474 Run-time dependency libpcap found: YES 1.10.4 00:01:54.474 Has header "pcap.h" with dependency libpcap: YES 00:01:54.474 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.474 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.474 Compiler for C supports arguments -Wformat: YES 00:01:54.474 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.474 Compiler for C supports arguments -Wformat-security: NO 00:01:54.474 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.474 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.474 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.474 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.474 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.474 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.474 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.474 Compiler for C supports arguments -Wundef: YES 00:01:54.475 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.475 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.475 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.475 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.475 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.475 Program objdump found: YES (/usr/bin/objdump) 00:01:54.475 Compiler for C supports arguments -mavx512f: YES 00:01:54.475 Checking if "AVX512 checking" compiles: YES 00:01:54.475 
Fetching value of define "__SSE4_2__" : 1 00:01:54.475 Fetching value of define "__AES__" : 1 00:01:54.475 Fetching value of define "__AVX__" : 1 00:01:54.475 Fetching value of define "__AVX2__" : 1 00:01:54.475 Fetching value of define "__AVX512BW__" : 1 00:01:54.475 Fetching value of define "__AVX512CD__" : 1 00:01:54.475 Fetching value of define "__AVX512DQ__" : 1 00:01:54.475 Fetching value of define "__AVX512F__" : 1 00:01:54.475 Fetching value of define "__AVX512VL__" : 1 00:01:54.475 Fetching value of define "__PCLMUL__" : 1 00:01:54.475 Fetching value of define "__RDRND__" : 1 00:01:54.475 Fetching value of define "__RDSEED__" : 1 00:01:54.475 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:54.475 Fetching value of define "__znver1__" : (undefined) 00:01:54.475 Fetching value of define "__znver2__" : (undefined) 00:01:54.475 Fetching value of define "__znver3__" : (undefined) 00:01:54.475 Fetching value of define "__znver4__" : (undefined) 00:01:54.475 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.475 Message: lib/log: Defining dependency "log" 00:01:54.475 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.475 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.475 Checking for function "getentropy" : NO 00:01:54.475 Message: lib/eal: Defining dependency "eal" 00:01:54.475 Message: lib/ring: Defining dependency "ring" 00:01:54.475 Message: lib/rcu: Defining dependency "rcu" 00:01:54.475 Message: lib/mempool: Defining dependency "mempool" 00:01:54.475 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.475 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.475 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.475 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.475 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.475 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.475 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:54.475 Compiler 
for C supports arguments -mpclmul: YES 00:01:54.475 Compiler for C supports arguments -maes: YES 00:01:54.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.475 Compiler for C supports arguments -mavx512bw: YES 00:01:54.475 Compiler for C supports arguments -mavx512dq: YES 00:01:54.475 Compiler for C supports arguments -mavx512vl: YES 00:01:54.475 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.475 Compiler for C supports arguments -mavx2: YES 00:01:54.475 Compiler for C supports arguments -mavx: YES 00:01:54.475 Message: lib/net: Defining dependency "net" 00:01:54.475 Message: lib/meter: Defining dependency "meter" 00:01:54.475 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.475 Message: lib/pci: Defining dependency "pci" 00:01:54.475 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.475 Message: lib/hash: Defining dependency "hash" 00:01:54.475 Message: lib/timer: Defining dependency "timer" 00:01:54.475 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.475 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.475 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.475 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.475 Message: lib/power: Defining dependency "power" 00:01:54.475 Message: lib/reorder: Defining dependency "reorder" 00:01:54.475 Message: lib/security: Defining dependency "security" 00:01:54.475 Has header "linux/userfaultfd.h" : YES 00:01:54.475 Has header "linux/vduse.h" : YES 00:01:54.475 Message: lib/vhost: Defining dependency "vhost" 00:01:54.475 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.475 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.475 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.475 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.475 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.475 Message: 
Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.475 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.475 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.475 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.475 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.475 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.475 Configuring doxy-api-html.conf using configuration 00:01:54.475 Configuring doxy-api-man.conf using configuration 00:01:54.475 Program mandb found: YES (/usr/bin/mandb) 00:01:54.475 Program sphinx-build found: NO 00:01:54.475 Configuring rte_build_config.h using configuration 00:01:54.475 Message: 00:01:54.475 ================= 00:01:54.475 Applications Enabled 00:01:54.475 ================= 00:01:54.475 00:01:54.475 apps: 00:01:54.475 00:01:54.475 00:01:54.475 Message: 00:01:54.475 ================= 00:01:54.475 Libraries Enabled 00:01:54.475 ================= 00:01:54.475 00:01:54.475 libs: 00:01:54.475 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.475 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.475 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.475 00:01:54.475 Message: 00:01:54.475 =============== 00:01:54.475 Drivers Enabled 00:01:54.475 =============== 00:01:54.475 00:01:54.475 common: 00:01:54.475 00:01:54.475 bus: 00:01:54.475 pci, vdev, 00:01:54.475 mempool: 00:01:54.475 ring, 00:01:54.475 dma: 00:01:54.475 00:01:54.475 net: 00:01:54.475 00:01:54.475 crypto: 00:01:54.475 00:01:54.475 compress: 00:01:54.475 00:01:54.475 vdpa: 00:01:54.475 00:01:54.475 00:01:54.475 Message: 00:01:54.475 ================= 00:01:54.475 Content Skipped 00:01:54.475 ================= 00:01:54.475 00:01:54.475 apps: 00:01:54.475 dumpcap: explicitly disabled via build config 00:01:54.475 graph: explicitly disabled via build config 00:01:54.475 
pdump: explicitly disabled via build config 00:01:54.475 proc-info: explicitly disabled via build config 00:01:54.475 test-acl: explicitly disabled via build config 00:01:54.475 test-bbdev: explicitly disabled via build config 00:01:54.475 test-cmdline: explicitly disabled via build config 00:01:54.475 test-compress-perf: explicitly disabled via build config 00:01:54.475 test-crypto-perf: explicitly disabled via build config 00:01:54.475 test-dma-perf: explicitly disabled via build config 00:01:54.475 test-eventdev: explicitly disabled via build config 00:01:54.475 test-fib: explicitly disabled via build config 00:01:54.475 test-flow-perf: explicitly disabled via build config 00:01:54.475 test-gpudev: explicitly disabled via build config 00:01:54.475 test-mldev: explicitly disabled via build config 00:01:54.475 test-pipeline: explicitly disabled via build config 00:01:54.475 test-pmd: explicitly disabled via build config 00:01:54.475 test-regex: explicitly disabled via build config 00:01:54.475 test-sad: explicitly disabled via build config 00:01:54.475 test-security-perf: explicitly disabled via build config 00:01:54.475 00:01:54.475 libs: 00:01:54.475 argparse: explicitly disabled via build config 00:01:54.475 metrics: explicitly disabled via build config 00:01:54.475 acl: explicitly disabled via build config 00:01:54.475 bbdev: explicitly disabled via build config 00:01:54.475 bitratestats: explicitly disabled via build config 00:01:54.475 bpf: explicitly disabled via build config 00:01:54.475 cfgfile: explicitly disabled via build config 00:01:54.475 distributor: explicitly disabled via build config 00:01:54.475 efd: explicitly disabled via build config 00:01:54.475 eventdev: explicitly disabled via build config 00:01:54.475 dispatcher: explicitly disabled via build config 00:01:54.475 gpudev: explicitly disabled via build config 00:01:54.475 gro: explicitly disabled via build config 00:01:54.475 gso: explicitly disabled via build config 00:01:54.475 ip_frag: 
explicitly disabled via build config 00:01:54.475 jobstats: explicitly disabled via build config 00:01:54.475 latencystats: explicitly disabled via build config 00:01:54.475 lpm: explicitly disabled via build config 00:01:54.475 member: explicitly disabled via build config 00:01:54.475 pcapng: explicitly disabled via build config 00:01:54.475 rawdev: explicitly disabled via build config 00:01:54.475 regexdev: explicitly disabled via build config 00:01:54.475 mldev: explicitly disabled via build config 00:01:54.475 rib: explicitly disabled via build config 00:01:54.475 sched: explicitly disabled via build config 00:01:54.475 stack: explicitly disabled via build config 00:01:54.475 ipsec: explicitly disabled via build config 00:01:54.475 pdcp: explicitly disabled via build config 00:01:54.475 fib: explicitly disabled via build config 00:01:54.475 port: explicitly disabled via build config 00:01:54.475 pdump: explicitly disabled via build config 00:01:54.475 table: explicitly disabled via build config 00:01:54.475 pipeline: explicitly disabled via build config 00:01:54.475 graph: explicitly disabled via build config 00:01:54.475 node: explicitly disabled via build config 00:01:54.475 00:01:54.475 drivers: 00:01:54.475 common/cpt: not in enabled drivers build config 00:01:54.475 common/dpaax: not in enabled drivers build config 00:01:54.475 common/iavf: not in enabled drivers build config 00:01:54.475 common/idpf: not in enabled drivers build config 00:01:54.475 common/ionic: not in enabled drivers build config 00:01:54.476 common/mvep: not in enabled drivers build config 00:01:54.476 common/octeontx: not in enabled drivers build config 00:01:54.476 bus/auxiliary: not in enabled drivers build config 00:01:54.476 bus/cdx: not in enabled drivers build config 00:01:54.476 bus/dpaa: not in enabled drivers build config 00:01:54.476 bus/fslmc: not in enabled drivers build config 00:01:54.476 bus/ifpga: not in enabled drivers build config 00:01:54.476 bus/platform: not in 
enabled drivers build config 00:01:54.476 bus/uacce: not in enabled drivers build config 00:01:54.476 bus/vmbus: not in enabled drivers build config 00:01:54.476 common/cnxk: not in enabled drivers build config 00:01:54.476 common/mlx5: not in enabled drivers build config 00:01:54.476 common/nfp: not in enabled drivers build config 00:01:54.476 common/nitrox: not in enabled drivers build config 00:01:54.476 common/qat: not in enabled drivers build config 00:01:54.476 common/sfc_efx: not in enabled drivers build config 00:01:54.476 mempool/bucket: not in enabled drivers build config 00:01:54.476 mempool/cnxk: not in enabled drivers build config 00:01:54.476 mempool/dpaa: not in enabled drivers build config 00:01:54.476 mempool/dpaa2: not in enabled drivers build config 00:01:54.476 mempool/octeontx: not in enabled drivers build config 00:01:54.476 mempool/stack: not in enabled drivers build config 00:01:54.476 dma/cnxk: not in enabled drivers build config 00:01:54.476 dma/dpaa: not in enabled drivers build config 00:01:54.476 dma/dpaa2: not in enabled drivers build config 00:01:54.476 dma/hisilicon: not in enabled drivers build config 00:01:54.476 dma/idxd: not in enabled drivers build config 00:01:54.476 dma/ioat: not in enabled drivers build config 00:01:54.476 dma/skeleton: not in enabled drivers build config 00:01:54.476 net/af_packet: not in enabled drivers build config 00:01:54.476 net/af_xdp: not in enabled drivers build config 00:01:54.476 net/ark: not in enabled drivers build config 00:01:54.476 net/atlantic: not in enabled drivers build config 00:01:54.476 net/avp: not in enabled drivers build config 00:01:54.476 net/axgbe: not in enabled drivers build config 00:01:54.476 net/bnx2x: not in enabled drivers build config 00:01:54.476 net/bnxt: not in enabled drivers build config 00:01:54.476 net/bonding: not in enabled drivers build config 00:01:54.476 net/cnxk: not in enabled drivers build config 00:01:54.476 net/cpfl: not in enabled drivers build config 
00:01:54.476 net/cxgbe: not in enabled drivers build config 00:01:54.476 net/dpaa: not in enabled drivers build config 00:01:54.476 net/dpaa2: not in enabled drivers build config 00:01:54.476 net/e1000: not in enabled drivers build config 00:01:54.476 net/ena: not in enabled drivers build config 00:01:54.476 net/enetc: not in enabled drivers build config 00:01:54.476 net/enetfec: not in enabled drivers build config 00:01:54.476 net/enic: not in enabled drivers build config 00:01:54.476 net/failsafe: not in enabled drivers build config 00:01:54.476 net/fm10k: not in enabled drivers build config 00:01:54.476 net/gve: not in enabled drivers build config 00:01:54.476 net/hinic: not in enabled drivers build config 00:01:54.476 net/hns3: not in enabled drivers build config 00:01:54.476 net/i40e: not in enabled drivers build config 00:01:54.476 net/iavf: not in enabled drivers build config 00:01:54.476 net/ice: not in enabled drivers build config 00:01:54.476 net/idpf: not in enabled drivers build config 00:01:54.476 net/igc: not in enabled drivers build config 00:01:54.476 net/ionic: not in enabled drivers build config 00:01:54.476 net/ipn3ke: not in enabled drivers build config 00:01:54.476 net/ixgbe: not in enabled drivers build config 00:01:54.476 net/mana: not in enabled drivers build config 00:01:54.476 net/memif: not in enabled drivers build config 00:01:54.476 net/mlx4: not in enabled drivers build config 00:01:54.476 net/mlx5: not in enabled drivers build config 00:01:54.476 net/mvneta: not in enabled drivers build config 00:01:54.476 net/mvpp2: not in enabled drivers build config 00:01:54.476 net/netvsc: not in enabled drivers build config 00:01:54.476 net/nfb: not in enabled drivers build config 00:01:54.476 net/nfp: not in enabled drivers build config 00:01:54.476 net/ngbe: not in enabled drivers build config 00:01:54.476 net/null: not in enabled drivers build config 00:01:54.476 net/octeontx: not in enabled drivers build config 00:01:54.476 net/octeon_ep: not 
in enabled drivers build config 00:01:54.476 net/pcap: not in enabled drivers build config 00:01:54.476 net/pfe: not in enabled drivers build config 00:01:54.476 net/qede: not in enabled drivers build config 00:01:54.476 net/ring: not in enabled drivers build config 00:01:54.476 net/sfc: not in enabled drivers build config 00:01:54.476 net/softnic: not in enabled drivers build config 00:01:54.476 net/tap: not in enabled drivers build config 00:01:54.476 net/thunderx: not in enabled drivers build config 00:01:54.476 net/txgbe: not in enabled drivers build config 00:01:54.476 net/vdev_netvsc: not in enabled drivers build config 00:01:54.476 net/vhost: not in enabled drivers build config 00:01:54.476 net/virtio: not in enabled drivers build config 00:01:54.476 net/vmxnet3: not in enabled drivers build config 00:01:54.476 raw/*: missing internal dependency, "rawdev" 00:01:54.476 crypto/armv8: not in enabled drivers build config 00:01:54.476 crypto/bcmfs: not in enabled drivers build config 00:01:54.476 crypto/caam_jr: not in enabled drivers build config 00:01:54.476 crypto/ccp: not in enabled drivers build config 00:01:54.476 crypto/cnxk: not in enabled drivers build config 00:01:54.476 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.476 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.476 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.476 crypto/mlx5: not in enabled drivers build config 00:01:54.476 crypto/mvsam: not in enabled drivers build config 00:01:54.476 crypto/nitrox: not in enabled drivers build config 00:01:54.476 crypto/null: not in enabled drivers build config 00:01:54.476 crypto/octeontx: not in enabled drivers build config 00:01:54.476 crypto/openssl: not in enabled drivers build config 00:01:54.476 crypto/scheduler: not in enabled drivers build config 00:01:54.476 crypto/uadk: not in enabled drivers build config 00:01:54.476 crypto/virtio: not in enabled drivers build config 00:01:54.476 compress/isal: not in 
enabled drivers build config 00:01:54.476 compress/mlx5: not in enabled drivers build config 00:01:54.476 compress/nitrox: not in enabled drivers build config 00:01:54.476 compress/octeontx: not in enabled drivers build config 00:01:54.476 compress/zlib: not in enabled drivers build config 00:01:54.476 regex/*: missing internal dependency, "regexdev" 00:01:54.476 ml/*: missing internal dependency, "mldev" 00:01:54.476 vdpa/ifc: not in enabled drivers build config 00:01:54.476 vdpa/mlx5: not in enabled drivers build config 00:01:54.476 vdpa/nfp: not in enabled drivers build config 00:01:54.476 vdpa/sfc: not in enabled drivers build config 00:01:54.476 event/*: missing internal dependency, "eventdev" 00:01:54.476 baseband/*: missing internal dependency, "bbdev" 00:01:54.476 gpu/*: missing internal dependency, "gpudev" 00:01:54.476 00:01:54.476 00:01:54.476 Build targets in project: 84 00:01:54.476 00:01:54.476 DPDK 24.03.0 00:01:54.476 00:01:54.476 User defined options 00:01:54.476 buildtype : debug 00:01:54.476 default_library : shared 00:01:54.476 libdir : lib 00:01:54.476 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.476 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.476 c_link_args : 00:01:54.476 cpu_instruction_set: native 00:01:54.476 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:54.476 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:54.476 enable_docs : false 00:01:54.476 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:54.476 enable_kmods : 
false 00:01:54.476 max_lcores : 128 00:01:54.476 tests : false 00:01:54.476 00:01:54.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.476 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.476 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.476 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.476 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.476 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.476 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.476 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.476 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.476 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.476 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.476 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.735 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.735 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.735 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.735 [14/267] Linking static target lib/librte_kvargs.a 00:01:54.735 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.735 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.735 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.735 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.735 [19/267] Linking static target lib/librte_log.a 00:01:54.735 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.735 
[21/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.735 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.735 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.735 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.735 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.735 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.735 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.735 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.735 [29/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.735 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.735 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.735 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.736 [33/267] Linking static target lib/librte_pci.a 00:01:54.736 [34/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:54.736 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.736 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:54.736 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.995 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.995 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.995 [40/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.995 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:54.995 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.995 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 
00:01:54.995 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.995 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.995 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.995 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.995 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.995 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.995 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.995 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.995 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.995 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.995 [54/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.995 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.995 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.995 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.995 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.995 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.995 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.995 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.995 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.995 [63/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:54.995 [64/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:54.995 [65/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.995 [66/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.995 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.995 [68/267] Linking static target lib/librte_telemetry.a 00:01:54.995 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.995 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.995 [71/267] Linking static target lib/librte_meter.a 00:01:54.995 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.995 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.995 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.995 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:54.995 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:54.995 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:54.995 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.995 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.995 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.995 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.995 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.995 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.256 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.256 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.256 [86/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.256 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.256 [88/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.256 [89/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.256 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.256 [91/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.256 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.256 [93/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.256 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.256 [95/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.256 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.256 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.256 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.256 [99/267] Linking static target lib/librte_timer.a 00:01:55.256 [100/267] Linking static target lib/librte_ring.a 00:01:55.256 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.256 [102/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.256 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.256 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.256 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.256 [106/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.256 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.256 [108/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.256 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.256 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.256 [111/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.256 [112/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.256 [113/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.256 [114/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.256 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.256 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.256 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.256 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.256 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.256 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.256 [121/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.256 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.256 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.256 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.256 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.256 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.256 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.256 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.256 [129/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.256 [130/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.256 [131/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.256 [132/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.256 [133/267] Compiling 
C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.256 [134/267] Linking static target lib/librte_cmdline.a 00:01:55.256 [135/267] Linking static target lib/librte_dmadev.a 00:01:55.256 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.256 [137/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.256 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.256 [139/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.256 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.256 [141/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.256 [142/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.256 [143/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.256 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.256 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.256 [146/267] Linking static target lib/librte_net.a 00:01:55.256 [147/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.256 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.256 [149/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.256 [150/267] Linking static target lib/librte_mempool.a 00:01:55.256 [151/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.256 [152/267] Linking target lib/librte_log.so.24.1 00:01:55.256 [153/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.257 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.257 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.257 [156/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.257 [157/267] Linking static target lib/librte_security.a 00:01:55.257 [158/267] Linking static target lib/librte_power.a 00:01:55.257 [159/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.257 [160/267] Linking static target lib/librte_compressdev.a 00:01:55.257 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.257 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.257 [163/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.257 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.257 [165/267] Linking static target lib/librte_rcu.a 00:01:55.257 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.257 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.257 [168/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.257 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.257 [170/267] Linking static target lib/librte_eal.a 00:01:55.257 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.257 [172/267] Linking static target lib/librte_reorder.a 00:01:55.257 [173/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.257 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.257 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.257 [176/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.519 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.519 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.519 [179/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.519 [180/267] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.519 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.519 [182/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.519 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.519 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.519 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.519 [186/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.519 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.519 [188/267] Linking target lib/librte_kvargs.so.24.1 00:01:55.519 [189/267] Linking static target drivers/librte_bus_vdev.a 00:01:55.519 [190/267] Linking static target lib/librte_mbuf.a 00:01:55.519 [191/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.519 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.519 [193/267] Linking static target lib/librte_hash.a 00:01:55.519 [194/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.519 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.519 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.519 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.519 [198/267] Linking static target drivers/librte_bus_pci.a 00:01:55.519 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.519 [200/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.780 [201/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.780 [202/267] Linking static target 
lib/librte_cryptodev.a 00:01:55.780 [203/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.780 [204/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.780 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.780 [206/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.780 [207/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.780 [208/267] Linking static target drivers/librte_mempool_ring.a 00:01:55.780 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:55.780 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.780 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.780 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.780 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.780 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.041 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.041 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.041 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.041 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.041 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.041 [220/267] Linking static target lib/librte_ethdev.a 00:01:56.302 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.302 [222/267] Generating 
lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.302 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.562 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.562 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.562 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.132 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.132 [228/267] Linking static target lib/librte_vhost.a 00:01:58.071 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.451 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.022 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.962 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.963 [233/267] Linking target lib/librte_eal.so.24.1 00:02:06.963 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.223 [235/267] Linking target lib/librte_ring.so.24.1 00:02:07.223 [236/267] Linking target lib/librte_timer.so.24.1 00:02:07.223 [237/267] Linking target lib/librte_meter.so.24.1 00:02:07.223 [238/267] Linking target lib/librte_pci.so.24.1 00:02:07.223 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:07.223 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.223 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.223 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.223 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.223 [244/267] 
Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.223 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.223 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:07.223 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.223 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:07.483 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.483 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.483 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.483 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:07.483 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.744 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:07.744 [255/267] Linking target lib/librte_net.so.24.1 00:02:07.744 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:07.744 [257/267] Linking target lib/librte_compressdev.so.24.1 00:02:07.744 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.744 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.744 [260/267] Linking target lib/librte_hash.so.24.1 00:02:07.744 [261/267] Linking target lib/librte_security.so.24.1 00:02:07.744 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:07.744 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:08.005 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.005 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.005 [266/267] Linking target lib/librte_power.so.24.1 00:02:08.005 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:08.005 INFO: autodetecting backend as ninja 00:02:08.005 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:11.308 CC lib/log/log.o 00:02:11.308 CC lib/ut_mock/mock.o 00:02:11.308 CC lib/log/log_deprecated.o 00:02:11.308 CC lib/log/log_flags.o 00:02:11.308 CC lib/ut/ut.o 00:02:11.308 LIB libspdk_ut_mock.a 00:02:11.308 LIB libspdk_log.a 00:02:11.308 SO libspdk_ut_mock.so.6.0 00:02:11.308 LIB libspdk_ut.a 00:02:11.308 SO libspdk_log.so.7.1 00:02:11.308 SO libspdk_ut.so.2.0 00:02:11.308 SYMLINK libspdk_ut_mock.so 00:02:11.308 SYMLINK libspdk_ut.so 00:02:11.308 SYMLINK libspdk_log.so 00:02:11.569 CXX lib/trace_parser/trace.o 00:02:11.829 CC lib/dma/dma.o 00:02:11.829 CC lib/ioat/ioat.o 00:02:11.829 CC lib/util/base64.o 00:02:11.829 CC lib/util/bit_array.o 00:02:11.829 CC lib/util/cpuset.o 00:02:11.829 CC lib/util/crc16.o 00:02:11.829 CC lib/util/crc32.o 00:02:11.829 CC lib/util/crc32c.o 00:02:11.829 CC lib/util/crc32_ieee.o 00:02:11.829 CC lib/util/crc64.o 00:02:11.829 CC lib/util/dif.o 00:02:11.829 CC lib/util/file.o 00:02:11.829 CC lib/util/fd.o 00:02:11.829 CC lib/util/fd_group.o 00:02:11.829 CC lib/util/hexlify.o 00:02:11.829 CC lib/util/iov.o 00:02:11.829 CC lib/util/math.o 00:02:11.829 CC lib/util/net.o 00:02:11.829 CC lib/util/pipe.o 00:02:11.829 CC lib/util/strerror_tls.o 00:02:11.829 CC lib/util/string.o 00:02:11.829 CC lib/util/uuid.o 00:02:11.829 CC lib/util/xor.o 00:02:11.829 CC lib/util/zipf.o 00:02:11.829 CC lib/util/md5.o 00:02:11.829 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.830 CC lib/vfio_user/host/vfio_user.o 00:02:11.830 LIB libspdk_dma.a 00:02:11.830 SO libspdk_dma.so.5.0 00:02:12.090 LIB libspdk_ioat.a 00:02:12.090 SYMLINK libspdk_dma.so 00:02:12.090 SO libspdk_ioat.so.7.0 00:02:12.090 LIB libspdk_vfio_user.a 00:02:12.090 SYMLINK libspdk_ioat.so 00:02:12.090 SO libspdk_vfio_user.so.5.0 00:02:12.090 SYMLINK libspdk_vfio_user.so 00:02:12.351 LIB libspdk_util.a 00:02:12.351 SO libspdk_util.so.10.1 00:02:12.351 SYMLINK libspdk_util.so 00:02:12.351 LIB 
libspdk_trace_parser.a 00:02:12.612 SO libspdk_trace_parser.so.6.0 00:02:12.612 SYMLINK libspdk_trace_parser.so 00:02:12.871 CC lib/conf/conf.o 00:02:12.871 CC lib/rdma_utils/rdma_utils.o 00:02:12.871 CC lib/vmd/vmd.o 00:02:12.871 CC lib/vmd/led.o 00:02:12.871 CC lib/env_dpdk/env.o 00:02:12.871 CC lib/idxd/idxd.o 00:02:12.871 CC lib/env_dpdk/memory.o 00:02:12.871 CC lib/json/json_parse.o 00:02:12.871 CC lib/idxd/idxd_user.o 00:02:12.871 CC lib/env_dpdk/pci.o 00:02:12.871 CC lib/json/json_util.o 00:02:12.871 CC lib/idxd/idxd_kernel.o 00:02:12.871 CC lib/env_dpdk/init.o 00:02:12.871 CC lib/json/json_write.o 00:02:12.871 CC lib/env_dpdk/threads.o 00:02:12.871 CC lib/env_dpdk/pci_ioat.o 00:02:12.871 CC lib/env_dpdk/pci_virtio.o 00:02:12.871 CC lib/env_dpdk/pci_vmd.o 00:02:12.871 CC lib/env_dpdk/pci_idxd.o 00:02:12.871 CC lib/env_dpdk/pci_event.o 00:02:12.871 CC lib/env_dpdk/sigbus_handler.o 00:02:12.871 CC lib/env_dpdk/pci_dpdk.o 00:02:12.871 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.871 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:13.132 LIB libspdk_conf.a 00:02:13.132 SO libspdk_conf.so.6.0 00:02:13.132 LIB libspdk_rdma_utils.a 00:02:13.132 LIB libspdk_json.a 00:02:13.132 SYMLINK libspdk_conf.so 00:02:13.132 SO libspdk_rdma_utils.so.1.0 00:02:13.132 SO libspdk_json.so.6.0 00:02:13.132 SYMLINK libspdk_rdma_utils.so 00:02:13.132 SYMLINK libspdk_json.so 00:02:13.393 LIB libspdk_idxd.a 00:02:13.393 SO libspdk_idxd.so.12.1 00:02:13.393 LIB libspdk_vmd.a 00:02:13.393 SO libspdk_vmd.so.6.0 00:02:13.393 SYMLINK libspdk_idxd.so 00:02:13.393 SYMLINK libspdk_vmd.so 00:02:13.654 CC lib/rdma_provider/common.o 00:02:13.654 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:13.654 CC lib/jsonrpc/jsonrpc_server.o 00:02:13.654 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:13.654 CC lib/jsonrpc/jsonrpc_client.o 00:02:13.654 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:13.654 LIB libspdk_rdma_provider.a 00:02:13.914 SO libspdk_rdma_provider.so.7.0 00:02:13.914 LIB libspdk_jsonrpc.a 00:02:13.914 SO 
libspdk_jsonrpc.so.6.0 00:02:13.914 SYMLINK libspdk_rdma_provider.so 00:02:13.914 SYMLINK libspdk_jsonrpc.so 00:02:13.914 LIB libspdk_env_dpdk.a 00:02:14.176 SO libspdk_env_dpdk.so.15.1 00:02:14.176 SYMLINK libspdk_env_dpdk.so 00:02:14.176 CC lib/rpc/rpc.o 00:02:14.438 LIB libspdk_rpc.a 00:02:14.438 SO libspdk_rpc.so.6.0 00:02:14.699 SYMLINK libspdk_rpc.so 00:02:14.959 CC lib/keyring/keyring.o 00:02:14.959 CC lib/keyring/keyring_rpc.o 00:02:14.959 CC lib/trace/trace.o 00:02:14.959 CC lib/notify/notify.o 00:02:14.959 CC lib/trace/trace_flags.o 00:02:14.959 CC lib/trace/trace_rpc.o 00:02:14.959 CC lib/notify/notify_rpc.o 00:02:15.220 LIB libspdk_notify.a 00:02:15.220 SO libspdk_notify.so.6.0 00:02:15.220 LIB libspdk_keyring.a 00:02:15.220 LIB libspdk_trace.a 00:02:15.220 SO libspdk_keyring.so.2.0 00:02:15.220 SO libspdk_trace.so.11.0 00:02:15.220 SYMLINK libspdk_notify.so 00:02:15.220 SYMLINK libspdk_keyring.so 00:02:15.220 SYMLINK libspdk_trace.so 00:02:15.793 CC lib/sock/sock.o 00:02:15.793 CC lib/sock/sock_rpc.o 00:02:15.793 CC lib/thread/thread.o 00:02:15.793 CC lib/thread/iobuf.o 00:02:16.055 LIB libspdk_sock.a 00:02:16.055 SO libspdk_sock.so.10.0 00:02:16.055 SYMLINK libspdk_sock.so 00:02:16.628 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:16.628 CC lib/nvme/nvme_ctrlr.o 00:02:16.628 CC lib/nvme/nvme_fabric.o 00:02:16.628 CC lib/nvme/nvme_ns_cmd.o 00:02:16.628 CC lib/nvme/nvme_ns.o 00:02:16.628 CC lib/nvme/nvme_pcie_common.o 00:02:16.628 CC lib/nvme/nvme_pcie.o 00:02:16.628 CC lib/nvme/nvme_qpair.o 00:02:16.628 CC lib/nvme/nvme.o 00:02:16.628 CC lib/nvme/nvme_quirks.o 00:02:16.628 CC lib/nvme/nvme_transport.o 00:02:16.628 CC lib/nvme/nvme_discovery.o 00:02:16.628 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:16.628 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:16.628 CC lib/nvme/nvme_tcp.o 00:02:16.628 CC lib/nvme/nvme_opal.o 00:02:16.628 CC lib/nvme/nvme_io_msg.o 00:02:16.628 CC lib/nvme/nvme_poll_group.o 00:02:16.628 CC lib/nvme/nvme_zns.o 00:02:16.628 CC lib/nvme/nvme_stubs.o 
00:02:16.628 CC lib/nvme/nvme_auth.o 00:02:16.628 CC lib/nvme/nvme_vfio_user.o 00:02:16.628 CC lib/nvme/nvme_cuse.o 00:02:16.628 CC lib/nvme/nvme_rdma.o 00:02:16.888 LIB libspdk_thread.a 00:02:16.888 SO libspdk_thread.so.11.0 00:02:17.147 SYMLINK libspdk_thread.so 00:02:17.406 CC lib/init/json_config.o 00:02:17.406 CC lib/init/rpc.o 00:02:17.406 CC lib/init/subsystem.o 00:02:17.406 CC lib/init/subsystem_rpc.o 00:02:17.406 CC lib/virtio/virtio.o 00:02:17.406 CC lib/virtio/virtio_vhost_user.o 00:02:17.406 CC lib/virtio/virtio_vfio_user.o 00:02:17.406 CC lib/virtio/virtio_pci.o 00:02:17.406 CC lib/blob/blobstore.o 00:02:17.406 CC lib/blob/request.o 00:02:17.406 CC lib/blob/zeroes.o 00:02:17.406 CC lib/blob/blob_bs_dev.o 00:02:17.406 CC lib/fsdev/fsdev.o 00:02:17.406 CC lib/fsdev/fsdev_io.o 00:02:17.406 CC lib/fsdev/fsdev_rpc.o 00:02:17.406 CC lib/accel/accel.o 00:02:17.406 CC lib/vfu_tgt/tgt_endpoint.o 00:02:17.406 CC lib/accel/accel_rpc.o 00:02:17.406 CC lib/vfu_tgt/tgt_rpc.o 00:02:17.406 CC lib/accel/accel_sw.o 00:02:17.667 LIB libspdk_init.a 00:02:17.667 SO libspdk_init.so.6.0 00:02:17.667 SYMLINK libspdk_init.so 00:02:17.667 LIB libspdk_virtio.a 00:02:17.667 LIB libspdk_vfu_tgt.a 00:02:17.667 SO libspdk_virtio.so.7.0 00:02:17.928 SO libspdk_vfu_tgt.so.3.0 00:02:17.928 SYMLINK libspdk_virtio.so 00:02:17.928 SYMLINK libspdk_vfu_tgt.so 00:02:17.928 LIB libspdk_fsdev.a 00:02:17.928 SO libspdk_fsdev.so.2.0 00:02:18.189 CC lib/event/app.o 00:02:18.189 CC lib/event/reactor.o 00:02:18.189 CC lib/event/log_rpc.o 00:02:18.189 CC lib/event/app_rpc.o 00:02:18.189 CC lib/event/scheduler_static.o 00:02:18.189 SYMLINK libspdk_fsdev.so 00:02:18.451 LIB libspdk_accel.a 00:02:18.451 LIB libspdk_nvme.a 00:02:18.451 SO libspdk_accel.so.16.0 00:02:18.451 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:18.451 LIB libspdk_event.a 00:02:18.451 SYMLINK libspdk_accel.so 00:02:18.451 SO libspdk_nvme.so.15.0 00:02:18.451 SO libspdk_event.so.14.0 00:02:18.712 SYMLINK libspdk_event.so 
00:02:18.712 SYMLINK libspdk_nvme.so 00:02:18.973 CC lib/bdev/bdev.o 00:02:18.973 CC lib/bdev/bdev_rpc.o 00:02:18.973 CC lib/bdev/bdev_zone.o 00:02:18.973 CC lib/bdev/part.o 00:02:18.973 CC lib/bdev/scsi_nvme.o 00:02:18.973 LIB libspdk_fuse_dispatcher.a 00:02:19.234 SO libspdk_fuse_dispatcher.so.1.0 00:02:19.234 SYMLINK libspdk_fuse_dispatcher.so 00:02:20.176 LIB libspdk_blob.a 00:02:20.176 SO libspdk_blob.so.11.0 00:02:20.176 SYMLINK libspdk_blob.so 00:02:20.437 CC lib/blobfs/blobfs.o 00:02:20.437 CC lib/blobfs/tree.o 00:02:20.437 CC lib/lvol/lvol.o 00:02:21.380 LIB libspdk_bdev.a 00:02:21.380 LIB libspdk_blobfs.a 00:02:21.380 SO libspdk_bdev.so.17.0 00:02:21.380 SO libspdk_blobfs.so.10.0 00:02:21.380 SYMLINK libspdk_bdev.so 00:02:21.380 LIB libspdk_lvol.a 00:02:21.380 SYMLINK libspdk_blobfs.so 00:02:21.380 SO libspdk_lvol.so.10.0 00:02:21.380 SYMLINK libspdk_lvol.so 00:02:21.641 CC lib/nbd/nbd.o 00:02:21.641 CC lib/nbd/nbd_rpc.o 00:02:21.641 CC lib/nvmf/ctrlr.o 00:02:21.641 CC lib/scsi/dev.o 00:02:21.641 CC lib/nvmf/ctrlr_discovery.o 00:02:21.641 CC lib/scsi/lun.o 00:02:21.641 CC lib/nvmf/ctrlr_bdev.o 00:02:21.641 CC lib/ublk/ublk.o 00:02:21.641 CC lib/nvmf/subsystem.o 00:02:21.641 CC lib/scsi/port.o 00:02:21.641 CC lib/ublk/ublk_rpc.o 00:02:21.641 CC lib/ftl/ftl_core.o 00:02:21.641 CC lib/nvmf/nvmf.o 00:02:21.641 CC lib/scsi/scsi.o 00:02:21.641 CC lib/ftl/ftl_init.o 00:02:21.641 CC lib/nvmf/nvmf_rpc.o 00:02:21.641 CC lib/nvmf/transport.o 00:02:21.641 CC lib/scsi/scsi_bdev.o 00:02:21.641 CC lib/ftl/ftl_layout.o 00:02:21.641 CC lib/nvmf/tcp.o 00:02:21.641 CC lib/ftl/ftl_debug.o 00:02:21.641 CC lib/nvmf/mdns_server.o 00:02:21.641 CC lib/scsi/scsi_pr.o 00:02:21.641 CC lib/nvmf/stubs.o 00:02:21.641 CC lib/ftl/ftl_io.o 00:02:21.641 CC lib/scsi/scsi_rpc.o 00:02:21.641 CC lib/ftl/ftl_sb.o 00:02:21.641 CC lib/scsi/task.o 00:02:21.641 CC lib/nvmf/vfio_user.o 00:02:21.641 CC lib/ftl/ftl_l2p.o 00:02:21.641 CC lib/ftl/ftl_l2p_flat.o 00:02:21.641 CC lib/nvmf/rdma.o 
00:02:21.641 CC lib/nvmf/auth.o 00:02:21.641 CC lib/ftl/ftl_nv_cache.o 00:02:21.641 CC lib/ftl/ftl_band.o 00:02:21.641 CC lib/ftl/ftl_band_ops.o 00:02:21.641 CC lib/ftl/ftl_writer.o 00:02:21.641 CC lib/ftl/ftl_rq.o 00:02:21.641 CC lib/ftl/ftl_reloc.o 00:02:21.641 CC lib/ftl/ftl_l2p_cache.o 00:02:21.641 CC lib/ftl/ftl_p2l.o 00:02:21.641 CC lib/ftl/ftl_p2l_log.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.641 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.641 CC lib/ftl/utils/ftl_conf.o 00:02:21.641 CC lib/ftl/utils/ftl_md.o 00:02:21.641 CC lib/ftl/utils/ftl_mempool.o 00:02:21.641 CC lib/ftl/utils/ftl_property.o 00:02:21.641 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.641 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.641 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:21.641 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.641 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.641 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.641 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.900 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:21.900 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.900 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.900 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.900 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.900 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:21.900 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:21.900 CC lib/ftl/base/ftl_base_dev.o 00:02:21.900 CC lib/ftl/ftl_trace.o 00:02:21.900 CC lib/ftl/base/ftl_base_bdev.o 00:02:22.159 LIB libspdk_nbd.a 00:02:22.159 SO 
libspdk_nbd.so.7.0 00:02:22.420 LIB libspdk_scsi.a 00:02:22.420 SYMLINK libspdk_nbd.so 00:02:22.420 SO libspdk_scsi.so.9.0 00:02:22.420 LIB libspdk_ublk.a 00:02:22.420 SYMLINK libspdk_scsi.so 00:02:22.420 SO libspdk_ublk.so.3.0 00:02:22.420 SYMLINK libspdk_ublk.so 00:02:22.681 LIB libspdk_ftl.a 00:02:22.681 CC lib/vhost/vhost_scsi.o 00:02:22.681 CC lib/vhost/vhost.o 00:02:22.681 CC lib/vhost/vhost_rpc.o 00:02:22.681 CC lib/vhost/rte_vhost_user.o 00:02:22.681 CC lib/vhost/vhost_blk.o 00:02:22.681 CC lib/iscsi/conn.o 00:02:22.681 CC lib/iscsi/init_grp.o 00:02:22.681 CC lib/iscsi/iscsi.o 00:02:22.681 CC lib/iscsi/param.o 00:02:22.681 CC lib/iscsi/portal_grp.o 00:02:22.681 CC lib/iscsi/tgt_node.o 00:02:22.681 CC lib/iscsi/iscsi_subsystem.o 00:02:22.681 CC lib/iscsi/iscsi_rpc.o 00:02:22.681 CC lib/iscsi/task.o 00:02:22.941 SO libspdk_ftl.so.9.0 00:02:23.203 SYMLINK libspdk_ftl.so 00:02:23.777 LIB libspdk_nvmf.a 00:02:23.777 SO libspdk_nvmf.so.20.0 00:02:23.777 LIB libspdk_vhost.a 00:02:23.777 SO libspdk_vhost.so.8.0 00:02:24.038 SYMLINK libspdk_nvmf.so 00:02:24.038 SYMLINK libspdk_vhost.so 00:02:24.038 LIB libspdk_iscsi.a 00:02:24.038 SO libspdk_iscsi.so.8.0 00:02:24.298 SYMLINK libspdk_iscsi.so 00:02:24.867 CC module/env_dpdk/env_dpdk_rpc.o 00:02:24.867 CC module/vfu_device/vfu_virtio.o 00:02:24.867 CC module/vfu_device/vfu_virtio_blk.o 00:02:24.867 CC module/vfu_device/vfu_virtio_scsi.o 00:02:24.867 CC module/vfu_device/vfu_virtio_rpc.o 00:02:24.867 CC module/vfu_device/vfu_virtio_fs.o 00:02:24.867 CC module/accel/error/accel_error.o 00:02:24.867 CC module/accel/error/accel_error_rpc.o 00:02:24.867 CC module/accel/dsa/accel_dsa.o 00:02:24.867 CC module/accel/dsa/accel_dsa_rpc.o 00:02:24.867 CC module/accel/ioat/accel_ioat.o 00:02:24.867 CC module/accel/iaa/accel_iaa.o 00:02:24.867 CC module/accel/ioat/accel_ioat_rpc.o 00:02:24.867 CC module/accel/iaa/accel_iaa_rpc.o 00:02:24.867 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:24.867 LIB libspdk_env_dpdk_rpc.a 
00:02:24.867 CC module/blob/bdev/blob_bdev.o 00:02:24.867 CC module/keyring/linux/keyring.o 00:02:24.867 CC module/keyring/linux/keyring_rpc.o 00:02:24.867 CC module/fsdev/aio/fsdev_aio.o 00:02:24.867 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:24.867 CC module/sock/posix/posix.o 00:02:24.867 CC module/fsdev/aio/linux_aio_mgr.o 00:02:24.867 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:24.867 CC module/keyring/file/keyring.o 00:02:24.867 CC module/keyring/file/keyring_rpc.o 00:02:24.867 CC module/scheduler/gscheduler/gscheduler.o 00:02:24.867 SO libspdk_env_dpdk_rpc.so.6.0 00:02:25.126 SYMLINK libspdk_env_dpdk_rpc.so 00:02:25.126 LIB libspdk_accel_error.a 00:02:25.126 LIB libspdk_keyring_linux.a 00:02:25.126 LIB libspdk_scheduler_dpdk_governor.a 00:02:25.127 LIB libspdk_scheduler_gscheduler.a 00:02:25.127 LIB libspdk_keyring_file.a 00:02:25.127 SO libspdk_accel_error.so.2.0 00:02:25.127 LIB libspdk_accel_ioat.a 00:02:25.127 SO libspdk_keyring_linux.so.1.0 00:02:25.127 LIB libspdk_accel_iaa.a 00:02:25.127 LIB libspdk_scheduler_dynamic.a 00:02:25.127 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:25.127 SO libspdk_scheduler_gscheduler.so.4.0 00:02:25.127 SO libspdk_keyring_file.so.2.0 00:02:25.127 SO libspdk_accel_ioat.so.6.0 00:02:25.127 SYMLINK libspdk_keyring_linux.so 00:02:25.127 SO libspdk_accel_iaa.so.3.0 00:02:25.127 SO libspdk_scheduler_dynamic.so.4.0 00:02:25.127 SYMLINK libspdk_accel_error.so 00:02:25.127 LIB libspdk_blob_bdev.a 00:02:25.127 LIB libspdk_accel_dsa.a 00:02:25.127 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:25.387 SYMLINK libspdk_scheduler_gscheduler.so 00:02:25.387 SO libspdk_blob_bdev.so.11.0 00:02:25.387 SYMLINK libspdk_accel_ioat.so 00:02:25.387 SYMLINK libspdk_keyring_file.so 00:02:25.387 SO libspdk_accel_dsa.so.5.0 00:02:25.387 SYMLINK libspdk_accel_iaa.so 00:02:25.387 SYMLINK libspdk_scheduler_dynamic.so 00:02:25.387 SYMLINK libspdk_blob_bdev.so 00:02:25.387 LIB libspdk_vfu_device.a 00:02:25.387 SYMLINK 
libspdk_accel_dsa.so 00:02:25.387 SO libspdk_vfu_device.so.3.0 00:02:25.387 SYMLINK libspdk_vfu_device.so 00:02:25.648 LIB libspdk_fsdev_aio.a 00:02:25.648 SO libspdk_fsdev_aio.so.1.0 00:02:25.648 LIB libspdk_sock_posix.a 00:02:25.648 SO libspdk_sock_posix.so.6.0 00:02:25.648 SYMLINK libspdk_fsdev_aio.so 00:02:25.908 SYMLINK libspdk_sock_posix.so 00:02:25.908 CC module/blobfs/bdev/blobfs_bdev.o 00:02:25.908 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:25.908 CC module/bdev/delay/vbdev_delay.o 00:02:25.909 CC module/bdev/error/vbdev_error.o 00:02:25.909 CC module/bdev/error/vbdev_error_rpc.o 00:02:25.909 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:25.909 CC module/bdev/gpt/gpt.o 00:02:25.909 CC module/bdev/gpt/vbdev_gpt.o 00:02:25.909 CC module/bdev/ftl/bdev_ftl.o 00:02:25.909 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:25.909 CC module/bdev/raid/bdev_raid.o 00:02:25.909 CC module/bdev/raid/bdev_raid_rpc.o 00:02:25.909 CC module/bdev/malloc/bdev_malloc.o 00:02:25.909 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:25.909 CC module/bdev/aio/bdev_aio.o 00:02:25.909 CC module/bdev/raid/bdev_raid_sb.o 00:02:25.909 CC module/bdev/raid/raid0.o 00:02:25.909 CC module/bdev/aio/bdev_aio_rpc.o 00:02:25.909 CC module/bdev/raid/raid1.o 00:02:25.909 CC module/bdev/raid/concat.o 00:02:25.909 CC module/bdev/lvol/vbdev_lvol.o 00:02:25.909 CC module/bdev/null/bdev_null.o 00:02:25.909 CC module/bdev/split/vbdev_split.o 00:02:25.909 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:25.909 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:25.909 CC module/bdev/split/vbdev_split_rpc.o 00:02:25.909 CC module/bdev/null/bdev_null_rpc.o 00:02:25.909 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:25.909 CC module/bdev/nvme/bdev_nvme.o 00:02:25.909 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:25.909 CC module/bdev/passthru/vbdev_passthru.o 00:02:25.909 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:25.909 CC module/bdev/nvme/nvme_rpc.o 00:02:25.909 CC module/bdev/nvme/bdev_mdns_client.o 
00:02:25.909 CC module/bdev/nvme/vbdev_opal.o 00:02:25.909 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:25.909 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:25.909 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:25.909 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:25.909 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:25.909 CC module/bdev/iscsi/bdev_iscsi.o 00:02:25.909 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:26.170 LIB libspdk_blobfs_bdev.a 00:02:26.170 SO libspdk_blobfs_bdev.so.6.0 00:02:26.170 LIB libspdk_bdev_split.a 00:02:26.170 LIB libspdk_bdev_null.a 00:02:26.170 SO libspdk_bdev_split.so.6.0 00:02:26.170 LIB libspdk_bdev_delay.a 00:02:26.170 LIB libspdk_bdev_gpt.a 00:02:26.170 LIB libspdk_bdev_error.a 00:02:26.170 SYMLINK libspdk_blobfs_bdev.so 00:02:26.170 SO libspdk_bdev_null.so.6.0 00:02:26.170 LIB libspdk_bdev_ftl.a 00:02:26.170 LIB libspdk_bdev_passthru.a 00:02:26.170 SO libspdk_bdev_gpt.so.6.0 00:02:26.170 SO libspdk_bdev_delay.so.6.0 00:02:26.170 SO libspdk_bdev_error.so.6.0 00:02:26.170 SO libspdk_bdev_passthru.so.6.0 00:02:26.170 SO libspdk_bdev_ftl.so.6.0 00:02:26.170 LIB libspdk_bdev_zone_block.a 00:02:26.170 SYMLINK libspdk_bdev_split.so 00:02:26.170 LIB libspdk_bdev_aio.a 00:02:26.170 SYMLINK libspdk_bdev_null.so 00:02:26.170 SYMLINK libspdk_bdev_gpt.so 00:02:26.430 SYMLINK libspdk_bdev_delay.so 00:02:26.430 LIB libspdk_bdev_malloc.a 00:02:26.430 SO libspdk_bdev_zone_block.so.6.0 00:02:26.430 LIB libspdk_bdev_iscsi.a 00:02:26.430 SYMLINK libspdk_bdev_error.so 00:02:26.430 SO libspdk_bdev_aio.so.6.0 00:02:26.430 SYMLINK libspdk_bdev_passthru.so 00:02:26.430 SYMLINK libspdk_bdev_ftl.so 00:02:26.430 SO libspdk_bdev_malloc.so.6.0 00:02:26.430 SO libspdk_bdev_iscsi.so.6.0 00:02:26.430 SYMLINK libspdk_bdev_aio.so 00:02:26.430 SYMLINK libspdk_bdev_zone_block.so 00:02:26.430 LIB libspdk_bdev_lvol.a 00:02:26.430 SYMLINK libspdk_bdev_malloc.so 00:02:26.430 SYMLINK libspdk_bdev_iscsi.so 00:02:26.430 LIB libspdk_bdev_virtio.a 00:02:26.430 SO 
libspdk_bdev_lvol.so.6.0 00:02:26.430 SO libspdk_bdev_virtio.so.6.0 00:02:26.430 SYMLINK libspdk_bdev_lvol.so 00:02:26.690 SYMLINK libspdk_bdev_virtio.so 00:02:26.950 LIB libspdk_bdev_raid.a 00:02:26.950 SO libspdk_bdev_raid.so.6.0 00:02:26.950 SYMLINK libspdk_bdev_raid.so 00:02:28.333 LIB libspdk_bdev_nvme.a 00:02:28.333 SO libspdk_bdev_nvme.so.7.1 00:02:28.333 SYMLINK libspdk_bdev_nvme.so 00:02:28.905 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:28.905 CC module/event/subsystems/keyring/keyring.o 00:02:28.905 CC module/event/subsystems/sock/sock.o 00:02:28.905 CC module/event/subsystems/scheduler/scheduler.o 00:02:28.905 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:28.905 CC module/event/subsystems/iobuf/iobuf.o 00:02:28.905 CC module/event/subsystems/vmd/vmd.o 00:02:28.905 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:28.905 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:28.905 CC module/event/subsystems/fsdev/fsdev.o 00:02:29.167 LIB libspdk_event_keyring.a 00:02:29.167 LIB libspdk_event_vfu_tgt.a 00:02:29.167 LIB libspdk_event_sock.a 00:02:29.167 LIB libspdk_event_iobuf.a 00:02:29.167 LIB libspdk_event_vhost_blk.a 00:02:29.167 LIB libspdk_event_scheduler.a 00:02:29.167 LIB libspdk_event_fsdev.a 00:02:29.167 LIB libspdk_event_vmd.a 00:02:29.167 SO libspdk_event_keyring.so.1.0 00:02:29.167 SO libspdk_event_iobuf.so.3.0 00:02:29.167 SO libspdk_event_vfu_tgt.so.3.0 00:02:29.167 SO libspdk_event_vhost_blk.so.3.0 00:02:29.167 SO libspdk_event_sock.so.5.0 00:02:29.167 SO libspdk_event_fsdev.so.1.0 00:02:29.167 SO libspdk_event_scheduler.so.4.0 00:02:29.167 SO libspdk_event_vmd.so.6.0 00:02:29.167 SYMLINK libspdk_event_keyring.so 00:02:29.167 SYMLINK libspdk_event_fsdev.so 00:02:29.167 SYMLINK libspdk_event_sock.so 00:02:29.167 SYMLINK libspdk_event_vhost_blk.so 00:02:29.167 SYMLINK libspdk_event_vfu_tgt.so 00:02:29.167 SYMLINK libspdk_event_iobuf.so 00:02:29.167 SYMLINK libspdk_event_scheduler.so 00:02:29.167 SYMLINK libspdk_event_vmd.so 
00:02:29.739 CC module/event/subsystems/accel/accel.o 00:02:29.739 LIB libspdk_event_accel.a 00:02:29.739 SO libspdk_event_accel.so.6.0 00:02:29.739 SYMLINK libspdk_event_accel.so 00:02:30.318 CC module/event/subsystems/bdev/bdev.o 00:02:30.318 LIB libspdk_event_bdev.a 00:02:30.318 SO libspdk_event_bdev.so.6.0 00:02:30.318 SYMLINK libspdk_event_bdev.so 00:02:30.894 CC module/event/subsystems/ublk/ublk.o 00:02:30.894 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:30.894 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:30.895 CC module/event/subsystems/scsi/scsi.o 00:02:30.895 CC module/event/subsystems/nbd/nbd.o 00:02:30.895 LIB libspdk_event_ublk.a 00:02:30.895 LIB libspdk_event_nbd.a 00:02:30.895 LIB libspdk_event_scsi.a 00:02:30.895 SO libspdk_event_ublk.so.3.0 00:02:30.895 SO libspdk_event_nbd.so.6.0 00:02:31.155 SO libspdk_event_scsi.so.6.0 00:02:31.155 LIB libspdk_event_nvmf.a 00:02:31.155 SYMLINK libspdk_event_nbd.so 00:02:31.155 SYMLINK libspdk_event_ublk.so 00:02:31.155 SO libspdk_event_nvmf.so.6.0 00:02:31.155 SYMLINK libspdk_event_scsi.so 00:02:31.155 SYMLINK libspdk_event_nvmf.so 00:02:31.416 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.416 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.677 LIB libspdk_event_vhost_scsi.a 00:02:31.677 LIB libspdk_event_iscsi.a 00:02:31.677 SO libspdk_event_vhost_scsi.so.3.0 00:02:31.677 SO libspdk_event_iscsi.so.6.0 00:02:31.677 SYMLINK libspdk_event_iscsi.so 00:02:31.677 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.938 SO libspdk.so.6.0 00:02:31.938 SYMLINK libspdk.so 00:02:32.199 CC app/spdk_nvme_perf/perf.o 00:02:32.199 CC test/rpc_client/rpc_client_test.o 00:02:32.199 CC app/spdk_top/spdk_top.o 00:02:32.199 TEST_HEADER include/spdk/assert.h 00:02:32.199 CC app/spdk_lspci/spdk_lspci.o 00:02:32.200 TEST_HEADER include/spdk/accel.h 00:02:32.200 TEST_HEADER include/spdk/accel_module.h 00:02:32.200 CXX app/trace/trace.o 00:02:32.200 TEST_HEADER include/spdk/base64.h 00:02:32.200 TEST_HEADER 
include/spdk/barrier.h 00:02:32.462 TEST_HEADER include/spdk/bdev.h 00:02:32.462 TEST_HEADER include/spdk/bdev_module.h 00:02:32.462 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.462 CC app/trace_record/trace_record.o 00:02:32.462 TEST_HEADER include/spdk/bit_array.h 00:02:32.462 TEST_HEADER include/spdk/bit_pool.h 00:02:32.462 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.462 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.462 TEST_HEADER include/spdk/blobfs.h 00:02:32.462 TEST_HEADER include/spdk/blob.h 00:02:32.462 TEST_HEADER include/spdk/conf.h 00:02:32.462 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.462 TEST_HEADER include/spdk/config.h 00:02:32.462 TEST_HEADER include/spdk/crc16.h 00:02:32.462 CC app/spdk_nvme_identify/identify.o 00:02:32.462 TEST_HEADER include/spdk/cpuset.h 00:02:32.462 TEST_HEADER include/spdk/crc64.h 00:02:32.462 TEST_HEADER include/spdk/crc32.h 00:02:32.462 TEST_HEADER include/spdk/dif.h 00:02:32.462 TEST_HEADER include/spdk/endian.h 00:02:32.462 TEST_HEADER include/spdk/dma.h 00:02:32.462 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.462 TEST_HEADER include/spdk/env.h 00:02:32.462 TEST_HEADER include/spdk/event.h 00:02:32.462 TEST_HEADER include/spdk/fd_group.h 00:02:32.462 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.462 TEST_HEADER include/spdk/fd.h 00:02:32.462 TEST_HEADER include/spdk/file.h 00:02:32.462 TEST_HEADER include/spdk/fsdev.h 00:02:32.462 TEST_HEADER include/spdk/ftl.h 00:02:32.462 TEST_HEADER include/spdk/fsdev_module.h 00:02:32.462 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:32.462 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.462 TEST_HEADER include/spdk/hexlify.h 00:02:32.462 TEST_HEADER include/spdk/histogram_data.h 00:02:32.462 TEST_HEADER include/spdk/idxd.h 00:02:32.462 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.462 TEST_HEADER include/spdk/init.h 00:02:32.462 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.462 TEST_HEADER include/spdk/ioat.h 00:02:32.462 TEST_HEADER include/spdk/iscsi_spec.h 
00:02:32.462 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.462 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.462 TEST_HEADER include/spdk/json.h 00:02:32.462 CC app/nvmf_tgt/nvmf_main.o 00:02:32.462 TEST_HEADER include/spdk/keyring.h 00:02:32.462 TEST_HEADER include/spdk/keyring_module.h 00:02:32.462 CC app/spdk_dd/spdk_dd.o 00:02:32.462 TEST_HEADER include/spdk/likely.h 00:02:32.462 TEST_HEADER include/spdk/log.h 00:02:32.462 TEST_HEADER include/spdk/lvol.h 00:02:32.462 TEST_HEADER include/spdk/md5.h 00:02:32.462 TEST_HEADER include/spdk/memory.h 00:02:32.462 TEST_HEADER include/spdk/mmio.h 00:02:32.462 TEST_HEADER include/spdk/nbd.h 00:02:32.462 TEST_HEADER include/spdk/net.h 00:02:32.462 TEST_HEADER include/spdk/notify.h 00:02:32.462 TEST_HEADER include/spdk/nvme.h 00:02:32.462 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.462 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.462 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.462 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.462 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.462 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.462 TEST_HEADER include/spdk/nvmf.h 00:02:32.462 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.462 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.462 CC app/spdk_tgt/spdk_tgt.o 00:02:32.462 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.462 TEST_HEADER include/spdk/opal_spec.h 00:02:32.462 TEST_HEADER include/spdk/opal.h 00:02:32.462 TEST_HEADER include/spdk/pci_ids.h 00:02:32.462 TEST_HEADER include/spdk/pipe.h 00:02:32.462 TEST_HEADER include/spdk/queue.h 00:02:32.462 TEST_HEADER include/spdk/reduce.h 00:02:32.462 TEST_HEADER include/spdk/scheduler.h 00:02:32.462 TEST_HEADER include/spdk/rpc.h 00:02:32.462 TEST_HEADER include/spdk/scsi.h 00:02:32.462 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.462 TEST_HEADER include/spdk/sock.h 00:02:32.462 TEST_HEADER include/spdk/stdinc.h 00:02:32.462 TEST_HEADER include/spdk/thread.h 00:02:32.462 TEST_HEADER include/spdk/string.h 00:02:32.462 
TEST_HEADER include/spdk/trace_parser.h 00:02:32.462 TEST_HEADER include/spdk/trace.h 00:02:32.462 TEST_HEADER include/spdk/tree.h 00:02:32.462 TEST_HEADER include/spdk/ublk.h 00:02:32.462 TEST_HEADER include/spdk/util.h 00:02:32.462 TEST_HEADER include/spdk/version.h 00:02:32.462 TEST_HEADER include/spdk/uuid.h 00:02:32.462 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.462 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.462 TEST_HEADER include/spdk/vhost.h 00:02:32.462 TEST_HEADER include/spdk/vmd.h 00:02:32.462 TEST_HEADER include/spdk/xor.h 00:02:32.462 TEST_HEADER include/spdk/zipf.h 00:02:32.462 CXX test/cpp_headers/accel.o 00:02:32.462 CXX test/cpp_headers/assert.o 00:02:32.462 CXX test/cpp_headers/accel_module.o 00:02:32.462 CXX test/cpp_headers/barrier.o 00:02:32.462 CXX test/cpp_headers/base64.o 00:02:32.462 CXX test/cpp_headers/bdev.o 00:02:32.462 CXX test/cpp_headers/bdev_module.o 00:02:32.462 CXX test/cpp_headers/bdev_zone.o 00:02:32.462 CXX test/cpp_headers/bit_array.o 00:02:32.462 CXX test/cpp_headers/bit_pool.o 00:02:32.462 CXX test/cpp_headers/blob_bdev.o 00:02:32.462 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.462 CXX test/cpp_headers/blobfs.o 00:02:32.462 CXX test/cpp_headers/blob.o 00:02:32.462 CXX test/cpp_headers/conf.o 00:02:32.462 CXX test/cpp_headers/cpuset.o 00:02:32.462 CXX test/cpp_headers/config.o 00:02:32.462 CXX test/cpp_headers/crc16.o 00:02:32.462 CXX test/cpp_headers/crc32.o 00:02:32.462 CXX test/cpp_headers/crc64.o 00:02:32.462 CXX test/cpp_headers/dif.o 00:02:32.462 CXX test/cpp_headers/endian.o 00:02:32.462 CXX test/cpp_headers/dma.o 00:02:32.462 CXX test/cpp_headers/env_dpdk.o 00:02:32.462 CXX test/cpp_headers/env.o 00:02:32.462 CXX test/cpp_headers/event.o 00:02:32.462 CXX test/cpp_headers/fd_group.o 00:02:32.462 CXX test/cpp_headers/fd.o 00:02:32.462 CXX test/cpp_headers/fsdev_module.o 00:02:32.462 CXX test/cpp_headers/file.o 00:02:32.462 CXX test/cpp_headers/fsdev.o 00:02:32.462 CXX test/cpp_headers/gpt_spec.o 
00:02:32.462 CXX test/cpp_headers/fuse_dispatcher.o 00:02:32.462 CXX test/cpp_headers/ftl.o 00:02:32.462 CXX test/cpp_headers/hexlify.o 00:02:32.462 CXX test/cpp_headers/histogram_data.o 00:02:32.462 CXX test/cpp_headers/idxd_spec.o 00:02:32.462 CXX test/cpp_headers/idxd.o 00:02:32.462 CXX test/cpp_headers/ioat.o 00:02:32.462 CXX test/cpp_headers/init.o 00:02:32.462 CC test/thread/poller_perf/poller_perf.o 00:02:32.462 CXX test/cpp_headers/iscsi_spec.o 00:02:32.462 CXX test/cpp_headers/ioat_spec.o 00:02:32.462 CXX test/cpp_headers/jsonrpc.o 00:02:32.462 CXX test/cpp_headers/json.o 00:02:32.462 CXX test/cpp_headers/keyring.o 00:02:32.462 CXX test/cpp_headers/log.o 00:02:32.462 CC test/app/jsoncat/jsoncat.o 00:02:32.462 CXX test/cpp_headers/md5.o 00:02:32.462 CXX test/cpp_headers/keyring_module.o 00:02:32.462 CXX test/cpp_headers/lvol.o 00:02:32.462 CXX test/cpp_headers/likely.o 00:02:32.462 CXX test/cpp_headers/mmio.o 00:02:32.462 CXX test/cpp_headers/memory.o 00:02:32.462 CXX test/cpp_headers/nbd.o 00:02:32.462 CC test/app/histogram_perf/histogram_perf.o 00:02:32.462 CXX test/cpp_headers/net.o 00:02:32.462 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.463 CXX test/cpp_headers/notify.o 00:02:32.463 CXX test/cpp_headers/nvme_spec.o 00:02:32.463 CXX test/cpp_headers/nvme.o 00:02:32.463 CXX test/cpp_headers/nvme_intel.o 00:02:32.463 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.463 CXX test/cpp_headers/nvme_zns.o 00:02:32.463 CXX test/cpp_headers/nvmf.o 00:02:32.463 CC test/app/stub/stub.o 00:02:32.463 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.463 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.463 CXX test/cpp_headers/nvmf_spec.o 00:02:32.463 CXX test/cpp_headers/opal.o 00:02:32.463 CXX test/cpp_headers/nvmf_transport.o 00:02:32.463 CC examples/util/zipf/zipf.o 00:02:32.463 CC test/env/pci/pci_ut.o 00:02:32.463 CXX test/cpp_headers/opal_spec.o 00:02:32.463 CXX test/cpp_headers/pci_ids.o 00:02:32.463 CXX test/cpp_headers/queue.o 00:02:32.463 CXX test/cpp_headers/pipe.o 
00:02:32.463 CC test/env/memory/memory_ut.o 00:02:32.463 CXX test/cpp_headers/rpc.o 00:02:32.463 CXX test/cpp_headers/reduce.o 00:02:32.463 CXX test/cpp_headers/scsi.o 00:02:32.463 CXX test/cpp_headers/scheduler.o 00:02:32.463 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.463 CXX test/cpp_headers/stdinc.o 00:02:32.463 CXX test/cpp_headers/scsi_spec.o 00:02:32.463 LINK spdk_lspci 00:02:32.724 CXX test/cpp_headers/string.o 00:02:32.724 CXX test/cpp_headers/sock.o 00:02:32.724 CXX test/cpp_headers/thread.o 00:02:32.724 CXX test/cpp_headers/trace.o 00:02:32.724 CXX test/cpp_headers/trace_parser.o 00:02:32.724 CC examples/ioat/verify/verify.o 00:02:32.724 CXX test/cpp_headers/tree.o 00:02:32.724 CXX test/cpp_headers/ublk.o 00:02:32.724 CXX test/cpp_headers/util.o 00:02:32.724 CXX test/cpp_headers/uuid.o 00:02:32.724 CC test/env/vtophys/vtophys.o 00:02:32.724 CXX test/cpp_headers/version.o 00:02:32.724 CXX test/cpp_headers/vfio_user_pci.o 00:02:32.724 CXX test/cpp_headers/vfio_user_spec.o 00:02:32.724 CC examples/ioat/perf/perf.o 00:02:32.724 CXX test/cpp_headers/vhost.o 00:02:32.724 CXX test/cpp_headers/vmd.o 00:02:32.724 CXX test/cpp_headers/xor.o 00:02:32.724 CXX test/cpp_headers/zipf.o 00:02:32.724 CC app/fio/nvme/fio_plugin.o 00:02:32.724 CC test/dma/test_dma/test_dma.o 00:02:32.724 CC test/app/bdev_svc/bdev_svc.o 00:02:32.724 LINK rpc_client_test 00:02:32.724 CC app/fio/bdev/fio_plugin.o 00:02:32.724 LINK interrupt_tgt 00:02:32.724 LINK nvmf_tgt 00:02:32.724 LINK spdk_nvme_discover 00:02:32.724 LINK iscsi_tgt 00:02:32.985 LINK spdk_trace_record 00:02:32.985 CC test/env/mem_callbacks/mem_callbacks.o 00:02:32.985 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:32.985 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:32.985 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:32.985 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:32.985 LINK spdk_tgt 00:02:32.985 LINK poller_perf 00:02:32.985 LINK spdk_trace 00:02:33.243 LINK jsoncat 00:02:33.244 LINK 
histogram_perf 00:02:33.244 LINK vtophys 00:02:33.244 LINK zipf 00:02:33.244 LINK stub 00:02:33.244 LINK env_dpdk_post_init 00:02:33.244 LINK bdev_svc 00:02:33.244 LINK spdk_dd 00:02:33.244 LINK ioat_perf 00:02:33.244 LINK verify 00:02:33.504 LINK spdk_nvme_perf 00:02:33.504 CC test/event/reactor/reactor.o 00:02:33.504 CC test/event/reactor_perf/reactor_perf.o 00:02:33.504 CC test/event/event_perf/event_perf.o 00:02:33.504 CC app/vhost/vhost.o 00:02:33.504 LINK nvme_fuzz 00:02:33.504 CC test/event/app_repeat/app_repeat.o 00:02:33.504 LINK vhost_fuzz 00:02:33.504 CC test/event/scheduler/scheduler.o 00:02:33.504 LINK pci_ut 00:02:33.504 LINK test_dma 00:02:33.504 LINK spdk_bdev 00:02:33.765 LINK reactor 00:02:33.765 LINK reactor_perf 00:02:33.765 LINK spdk_nvme 00:02:33.765 LINK event_perf 00:02:33.765 LINK spdk_top 00:02:33.765 CC examples/vmd/lsvmd/lsvmd.o 00:02:33.765 CC examples/vmd/led/led.o 00:02:33.765 CC examples/sock/hello_world/hello_sock.o 00:02:33.765 LINK app_repeat 00:02:33.765 LINK vhost 00:02:33.765 LINK mem_callbacks 00:02:33.765 CC examples/idxd/perf/perf.o 00:02:33.765 LINK spdk_nvme_identify 00:02:33.765 CC examples/thread/thread/thread_ex.o 00:02:33.765 LINK scheduler 00:02:33.765 LINK lsvmd 00:02:34.025 LINK led 00:02:34.026 LINK hello_sock 00:02:34.026 LINK thread 00:02:34.026 LINK idxd_perf 00:02:34.026 LINK memory_ut 00:02:34.286 CC test/nvme/aer/aer.o 00:02:34.286 CC test/nvme/boot_partition/boot_partition.o 00:02:34.286 CC test/nvme/connect_stress/connect_stress.o 00:02:34.286 CC test/nvme/e2edp/nvme_dp.o 00:02:34.286 CC test/nvme/compliance/nvme_compliance.o 00:02:34.286 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.286 CC test/nvme/reserve/reserve.o 00:02:34.286 CC test/nvme/reset/reset.o 00:02:34.286 CC test/nvme/fdp/fdp.o 00:02:34.286 CC test/nvme/simple_copy/simple_copy.o 00:02:34.286 CC test/nvme/sgl/sgl.o 00:02:34.286 CC test/nvme/overhead/overhead.o 00:02:34.286 CC test/nvme/startup/startup.o 00:02:34.286 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:34.286 CC test/nvme/err_injection/err_injection.o 00:02:34.286 CC test/nvme/cuse/cuse.o 00:02:34.286 CC test/blobfs/mkfs/mkfs.o 00:02:34.286 CC test/accel/dif/dif.o 00:02:34.286 CC test/lvol/esnap/esnap.o 00:02:34.547 LINK boot_partition 00:02:34.547 LINK reserve 00:02:34.547 LINK startup 00:02:34.547 LINK doorbell_aers 00:02:34.547 LINK err_injection 00:02:34.547 LINK connect_stress 00:02:34.547 LINK fused_ordering 00:02:34.547 LINK mkfs 00:02:34.547 LINK simple_copy 00:02:34.547 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:34.547 LINK sgl 00:02:34.547 LINK reset 00:02:34.547 LINK nvme_dp 00:02:34.547 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:34.547 CC examples/nvme/reconnect/reconnect.o 00:02:34.547 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:34.547 LINK aer 00:02:34.547 CC examples/nvme/arbitration/arbitration.o 00:02:34.547 CC examples/nvme/abort/abort.o 00:02:34.547 CC examples/nvme/hotplug/hotplug.o 00:02:34.547 CC examples/nvme/hello_world/hello_world.o 00:02:34.547 LINK overhead 00:02:34.547 LINK nvme_compliance 00:02:34.547 LINK fdp 00:02:34.547 LINK iscsi_fuzz 00:02:34.547 CC examples/accel/perf/accel_perf.o 00:02:34.808 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:34.808 CC examples/blob/hello_world/hello_blob.o 00:02:34.808 CC examples/blob/cli/blobcli.o 00:02:34.808 LINK pmr_persistence 00:02:34.808 LINK cmb_copy 00:02:34.808 LINK hotplug 00:02:34.808 LINK hello_world 00:02:34.808 LINK reconnect 00:02:34.808 LINK arbitration 00:02:34.808 LINK dif 00:02:34.808 LINK abort 00:02:35.070 LINK nvme_manage 00:02:35.070 LINK hello_blob 00:02:35.070 LINK hello_fsdev 00:02:35.070 LINK accel_perf 00:02:35.070 LINK blobcli 00:02:35.333 LINK cuse 00:02:35.594 CC test/bdev/bdevio/bdevio.o 00:02:35.594 CC examples/bdev/hello_world/hello_bdev.o 00:02:35.594 CC examples/bdev/bdevperf/bdevperf.o 00:02:35.855 LINK bdevio 00:02:35.855 LINK hello_bdev 00:02:36.428 LINK bdevperf 00:02:37.000 CC 
examples/nvmf/nvmf/nvmf.o 00:02:37.261 LINK nvmf 00:02:39.179 LINK esnap 00:02:39.179 00:02:39.179 real 0m54.264s 00:02:39.179 user 7m47.519s 00:02:39.179 sys 4m24.586s 00:02:39.179 13:27:02 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:39.179 13:27:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:39.179 ************************************ 00:02:39.179 END TEST make 00:02:39.179 ************************************ 00:02:39.179 13:27:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:39.179 13:27:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:39.179 13:27:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:39.179 13:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.179 13:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:39.179 13:27:02 -- pm/common@44 -- $ pid=311857 00:02:39.179 13:27:02 -- pm/common@50 -- $ kill -TERM 311857 00:02:39.179 13:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.179 13:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:39.179 13:27:02 -- pm/common@44 -- $ pid=311858 00:02:39.179 13:27:02 -- pm/common@50 -- $ kill -TERM 311858 00:02:39.179 13:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.179 13:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:39.179 13:27:02 -- pm/common@44 -- $ pid=311860 00:02:39.179 13:27:02 -- pm/common@50 -- $ kill -TERM 311860 00:02:39.179 13:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.179 13:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:39.179 13:27:02 -- pm/common@44 -- $ pid=311884 00:02:39.179 13:27:02 -- pm/common@50 -- 
$ sudo -E kill -TERM 311884 00:02:39.179 13:27:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:39.179 13:27:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:39.442 13:27:02 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:39.442 13:27:02 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:39.442 13:27:02 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:39.442 13:27:02 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:39.442 13:27:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:39.442 13:27:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:39.442 13:27:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:39.442 13:27:02 -- scripts/common.sh@336 -- # IFS=.-: 00:02:39.442 13:27:02 -- scripts/common.sh@336 -- # read -ra ver1 00:02:39.442 13:27:02 -- scripts/common.sh@337 -- # IFS=.-: 00:02:39.442 13:27:02 -- scripts/common.sh@337 -- # read -ra ver2 00:02:39.442 13:27:02 -- scripts/common.sh@338 -- # local 'op=<' 00:02:39.442 13:27:02 -- scripts/common.sh@340 -- # ver1_l=2 00:02:39.442 13:27:02 -- scripts/common.sh@341 -- # ver2_l=1 00:02:39.442 13:27:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:39.442 13:27:02 -- scripts/common.sh@344 -- # case "$op" in 00:02:39.442 13:27:02 -- scripts/common.sh@345 -- # : 1 00:02:39.442 13:27:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:39.442 13:27:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:39.442 13:27:02 -- scripts/common.sh@365 -- # decimal 1 00:02:39.442 13:27:02 -- scripts/common.sh@353 -- # local d=1 00:02:39.442 13:27:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:39.442 13:27:02 -- scripts/common.sh@355 -- # echo 1 00:02:39.442 13:27:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:39.442 13:27:02 -- scripts/common.sh@366 -- # decimal 2 00:02:39.442 13:27:02 -- scripts/common.sh@353 -- # local d=2 00:02:39.442 13:27:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:39.442 13:27:02 -- scripts/common.sh@355 -- # echo 2 00:02:39.442 13:27:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:39.442 13:27:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:39.442 13:27:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:39.442 13:27:02 -- scripts/common.sh@368 -- # return 0 00:02:39.442 13:27:02 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:39.442 13:27:02 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:39.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.442 --rc genhtml_branch_coverage=1 00:02:39.442 --rc genhtml_function_coverage=1 00:02:39.442 --rc genhtml_legend=1 00:02:39.442 --rc geninfo_all_blocks=1 00:02:39.442 --rc geninfo_unexecuted_blocks=1 00:02:39.442 00:02:39.442 ' 00:02:39.442 13:27:02 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:39.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.442 --rc genhtml_branch_coverage=1 00:02:39.442 --rc genhtml_function_coverage=1 00:02:39.442 --rc genhtml_legend=1 00:02:39.442 --rc geninfo_all_blocks=1 00:02:39.442 --rc geninfo_unexecuted_blocks=1 00:02:39.442 00:02:39.442 ' 00:02:39.442 13:27:02 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:39.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.442 --rc genhtml_branch_coverage=1 00:02:39.442 --rc 
genhtml_function_coverage=1 00:02:39.442 --rc genhtml_legend=1 00:02:39.442 --rc geninfo_all_blocks=1 00:02:39.442 --rc geninfo_unexecuted_blocks=1 00:02:39.442 00:02:39.442 ' 00:02:39.442 13:27:02 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:39.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.442 --rc genhtml_branch_coverage=1 00:02:39.442 --rc genhtml_function_coverage=1 00:02:39.442 --rc genhtml_legend=1 00:02:39.442 --rc geninfo_all_blocks=1 00:02:39.442 --rc geninfo_unexecuted_blocks=1 00:02:39.442 00:02:39.442 ' 00:02:39.442 13:27:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:39.442 13:27:02 -- nvmf/common.sh@7 -- # uname -s 00:02:39.442 13:27:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:39.442 13:27:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:39.442 13:27:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:39.442 13:27:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:39.442 13:27:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:39.442 13:27:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:39.442 13:27:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:39.442 13:27:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:39.442 13:27:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:39.442 13:27:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:39.442 13:27:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:39.442 13:27:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:39.442 13:27:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:39.442 13:27:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:39.442 13:27:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:39.442 13:27:02 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:39.442 13:27:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:39.442 13:27:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:39.442 13:27:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:39.442 13:27:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:39.442 13:27:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:39.442 13:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.442 13:27:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.442 13:27:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.442 13:27:02 -- paths/export.sh@5 -- # export PATH 00:02:39.442 13:27:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.442 13:27:02 -- nvmf/common.sh@51 -- # : 0 00:02:39.442 13:27:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:39.442 13:27:02 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:39.442 13:27:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:39.442 13:27:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:39.442 13:27:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:39.442 13:27:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:39.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:39.442 13:27:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:39.442 13:27:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:39.442 13:27:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:39.442 13:27:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:39.442 13:27:02 -- spdk/autotest.sh@32 -- # uname -s 00:02:39.442 13:27:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:39.443 13:27:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:39.443 13:27:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.443 13:27:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:39.443 13:27:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.443 13:27:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:39.443 13:27:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:39.443 13:27:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:39.443 13:27:02 -- spdk/autotest.sh@48 -- # udevadm_pid=377178 00:02:39.443 13:27:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:39.443 13:27:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:39.443 13:27:02 -- pm/common@17 -- # local monitor 00:02:39.443 13:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.443 13:27:02 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:39.443 13:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.443 13:27:02 -- pm/common@21 -- # date +%s 00:02:39.443 13:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.443 13:27:02 -- pm/common@21 -- # date +%s 00:02:39.443 13:27:02 -- pm/common@25 -- # sleep 1 00:02:39.443 13:27:02 -- pm/common@21 -- # date +%s 00:02:39.443 13:27:02 -- pm/common@21 -- # date +%s 00:02:39.443 13:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730896022 00:02:39.443 13:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730896022 00:02:39.443 13:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730896022 00:02:39.443 13:27:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730896022 00:02:39.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730896022_collect-vmstat.pm.log 00:02:39.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730896022_collect-cpu-load.pm.log 00:02:39.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730896022_collect-cpu-temp.pm.log 00:02:39.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730896022_collect-bmc-pm.bmc.pm.log 00:02:40.386 
13:27:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:40.386 13:27:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:40.386 13:27:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:40.386 13:27:03 -- common/autotest_common.sh@10 -- # set +x 00:02:40.387 13:27:03 -- spdk/autotest.sh@59 -- # create_test_list 00:02:40.387 13:27:03 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:40.387 13:27:03 -- common/autotest_common.sh@10 -- # set +x 00:02:40.647 13:27:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:40.647 13:27:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.647 13:27:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.647 13:27:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:40.647 13:27:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.647 13:27:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:40.647 13:27:03 -- common/autotest_common.sh@1455 -- # uname 00:02:40.647 13:27:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:40.647 13:27:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:40.647 13:27:03 -- common/autotest_common.sh@1475 -- # uname 00:02:40.647 13:27:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:40.647 13:27:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:40.647 13:27:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:40.647 lcov: LCOV version 1.15 00:02:40.647 13:27:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:55.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:55.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:13.693 13:27:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:13.693 13:27:34 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:13.693 13:27:34 -- common/autotest_common.sh@10 -- # set +x
00:03:13.693 13:27:34 -- spdk/autotest.sh@78 -- # rm -f
00:03:13.693 13:27:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:14.264 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:14.264 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:14.264 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:14.524 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:14.524 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:14.524 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:14.524 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:14.524 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:14.784 13:27:38 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:14.784 13:27:38 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:14.784 13:27:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:14.784 13:27:38 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:14.784 13:27:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:14.784 13:27:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:14.784 13:27:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:14.784 13:27:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:14.784 13:27:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:14.784 13:27:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:14.784 13:27:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:14.784 13:27:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:14.784 13:27:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:14.784 13:27:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:14.784 13:27:38 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:14.784 No valid GPT data, bailing
00:03:14.784 13:27:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:14.784 13:27:38 -- scripts/common.sh@394 -- # pt=
00:03:14.784 13:27:38 -- scripts/common.sh@395 -- # return 1
00:03:14.784 13:27:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:14.784 1+0 records in
00:03:14.784 1+0 records out
00:03:14.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050655 s, 207 MB/s
00:03:14.784 13:27:38 -- spdk/autotest.sh@105 -- # sync
00:03:14.784 13:27:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:14.784 13:27:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:14.784 13:27:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:24.837 13:27:46 -- spdk/autotest.sh@111 -- # uname -s
00:03:24.837 13:27:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:24.837 13:27:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:24.837 13:27:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:26.753 Hugepages
00:03:26.753 node hugesize free / total
00:03:26.753 node0 1048576kB 0 / 0
00:03:26.753 node0 2048kB 0 / 0
00:03:26.753 node1 1048576kB 0 / 0
00:03:26.753 node1 2048kB 0 / 0
00:03:26.753
00:03:26.753 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:26.753 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:26.753 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:27.014 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:27.014 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:27.014 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:27.014 13:27:50 -- spdk/autotest.sh@117 -- # uname -s
00:03:27.014 13:27:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:27.014 13:27:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:27.014 13:27:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.317 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:30.317 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:30.578 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:32.494 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:32.756 13:27:55 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:33.698 13:27:56 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:33.698 13:27:56 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:33.698 13:27:56 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:33.698 13:27:56 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:33.698 13:27:56 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:33.698 13:27:56 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:33.698 13:27:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:33.698 13:27:56 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:33.698 13:27:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:33.698 13:27:56 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:33.698 13:27:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:03:33.698 13:27:56 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.003 Waiting for block devices as requested
00:03:37.264 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:37.264 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:37.264 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:37.264 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:37.525 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:37.525 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:37.525 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:37.786 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:03:37.786 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:03:38.047 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:38.047 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:38.047 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:38.047 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:38.307 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:38.307 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:38.307 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:38.307 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:03:38.877 13:28:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:38.877 13:28:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:03:38.877 13:28:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:03:38.877 13:28:01 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme
00:03:38.877 13:28:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:03:38.877 13:28:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:03:38.878 13:28:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:03:38.878 13:28:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:03:38.878 13:28:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:03:38.878 13:28:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:03:38.878 13:28:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:03:38.878 13:28:01 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:38.878 13:28:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:38.878 13:28:02 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f'
00:03:38.878 13:28:02 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:38.878 13:28:02 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:38.878 13:28:02 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:03:38.878 13:28:02 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:38.878 13:28:02 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:38.878 13:28:02 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:38.878 13:28:02 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:38.878 13:28:02 -- common/autotest_common.sh@1541 -- # continue
00:03:38.878 13:28:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:38.878 13:28:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:38.878 13:28:02 -- common/autotest_common.sh@10 -- # set +x
00:03:38.878 13:28:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:38.878 13:28:02 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:38.878 13:28:02 -- common/autotest_common.sh@10 -- # set +x
00:03:38.878 13:28:02 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:42.181 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:42.181 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:42.441 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:42.441 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:42.441 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:42.441 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:42.441 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:42.702 13:28:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:42.702 13:28:05 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:42.702 13:28:05 -- common/autotest_common.sh@10 -- # set +x
00:03:42.702 13:28:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:42.702 13:28:06 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:03:42.702 13:28:06 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:03:42.702 13:28:06 -- common/autotest_common.sh@1561 -- # bdfs=()
00:03:42.702 13:28:06 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:03:42.702 13:28:06 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:03:42.702 13:28:06 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:03:42.702 13:28:06 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:03:42.702 13:28:06 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:42.702 13:28:06 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:42.702 13:28:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:42.702 13:28:06 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:42.702 13:28:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:42.964 13:28:06 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:42.964 13:28:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:03:42.964 13:28:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:42.964 13:28:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:03:42.964 13:28:06 -- common/autotest_common.sh@1564 -- # device=0xa80a
00:03:42.964 13:28:06 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:03:42.964 13:28:06 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:03:42.964 13:28:06 -- common/autotest_common.sh@1570 -- # return 0
00:03:42.964 13:28:06 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:03:42.964 13:28:06 -- common/autotest_common.sh@1578 -- # return 0
00:03:42.964 13:28:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:42.964 13:28:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:42.964 13:28:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:42.964 13:28:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:42.964 13:28:06 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:42.964 13:28:06 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:42.964 13:28:06 -- common/autotest_common.sh@10 -- # set +x
00:03:42.964 13:28:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:42.964 13:28:06 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:42.964 13:28:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:42.964 13:28:06 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:42.964 13:28:06 -- common/autotest_common.sh@10 -- # set +x
00:03:42.964 ************************************
00:03:42.964 START TEST env
00:03:42.964 ************************************
00:03:42.964 13:28:06 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:42.964 * Looking for test storage...
00:03:42.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:42.964 13:28:06 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:42.964 13:28:06 env -- common/autotest_common.sh@1691 -- # lcov --version
00:03:42.964 13:28:06 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:43.225 13:28:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:43.225 13:28:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:43.225 13:28:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:43.225 13:28:06 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:43.225 13:28:06 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:43.225 13:28:06 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:43.225 13:28:06 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:43.225 13:28:06 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:43.225 13:28:06 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:43.225 13:28:06 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:43.225 13:28:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:43.225 13:28:06 env -- scripts/common.sh@344 -- # case "$op" in
00:03:43.225 13:28:06 env -- scripts/common.sh@345 -- # : 1
00:03:43.225 13:28:06 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:43.225 13:28:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:43.225 13:28:06 env -- scripts/common.sh@365 -- # decimal 1
00:03:43.225 13:28:06 env -- scripts/common.sh@353 -- # local d=1
00:03:43.225 13:28:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:43.225 13:28:06 env -- scripts/common.sh@355 -- # echo 1
00:03:43.225 13:28:06 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:43.225 13:28:06 env -- scripts/common.sh@366 -- # decimal 2
00:03:43.225 13:28:06 env -- scripts/common.sh@353 -- # local d=2
00:03:43.225 13:28:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:43.225 13:28:06 env -- scripts/common.sh@355 -- # echo 2
00:03:43.225 13:28:06 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:43.225 13:28:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:43.225 13:28:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:43.225 13:28:06 env -- scripts/common.sh@368 -- # return 0
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.225 --rc genhtml_branch_coverage=1
00:03:43.225 --rc genhtml_function_coverage=1
00:03:43.225 --rc genhtml_legend=1
00:03:43.225 --rc geninfo_all_blocks=1
00:03:43.225 --rc geninfo_unexecuted_blocks=1
00:03:43.225
00:03:43.225 '
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.225 --rc genhtml_branch_coverage=1
00:03:43.225 --rc genhtml_function_coverage=1
00:03:43.225 --rc genhtml_legend=1
00:03:43.225 --rc geninfo_all_blocks=1
00:03:43.225 --rc geninfo_unexecuted_blocks=1
00:03:43.225
00:03:43.225 '
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.225 --rc genhtml_branch_coverage=1
00:03:43.225 --rc genhtml_function_coverage=1
00:03:43.225 --rc genhtml_legend=1
00:03:43.225 --rc geninfo_all_blocks=1
00:03:43.225 --rc geninfo_unexecuted_blocks=1
00:03:43.225
00:03:43.225 '
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.225 --rc genhtml_branch_coverage=1
00:03:43.225 --rc genhtml_function_coverage=1
00:03:43.225 --rc genhtml_legend=1
00:03:43.225 --rc geninfo_all_blocks=1
00:03:43.225 --rc geninfo_unexecuted_blocks=1
00:03:43.225
00:03:43.225 '
00:03:43.225 13:28:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:43.225 13:28:06 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:43.225 13:28:06 env -- common/autotest_common.sh@10 -- # set +x
00:03:43.225 ************************************
00:03:43.225 START TEST env_memory
00:03:43.225 ************************************
00:03:43.225 13:28:06 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:43.225
00:03:43.225
00:03:43.225 CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.225 http://cunit.sourceforge.net/
00:03:43.225
00:03:43.225
00:03:43.225 Suite: memory
00:03:43.225 Test: alloc and free memory map ...[2024-11-06 13:28:06.476593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:43.225 passed
00:03:43.225 Test: mem map translation ...[2024-11-06 13:28:06.501967] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:43.226 [2024-11-06 13:28:06.501988] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:43.226 [2024-11-06 13:28:06.502034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:43.226 [2024-11-06 13:28:06.502041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:43.226 passed
00:03:43.226 Test: mem map registration ...[2024-11-06 13:28:06.557022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:43.226 [2024-11-06 13:28:06.557047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:43.226 passed
00:03:43.488 Test: mem map adjacent registrations ...passed
00:03:43.488
00:03:43.488 Run Summary: Type Total Ran Passed Failed Inactive
00:03:43.488 suites 1 1 n/a 0 0
00:03:43.488 tests 4 4 4 0 0
00:03:43.488 asserts 152 152 152 0 n/a
00:03:43.488
00:03:43.488 Elapsed time = 0.192 seconds
00:03:43.488
00:03:43.488 real 0m0.207s
00:03:43.488 user 0m0.199s
00:03:43.488 sys 0m0.007s
00:03:43.488 13:28:06 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:43.488 13:28:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:43.488 ************************************
00:03:43.488 END TEST env_memory
00:03:43.488 ************************************
00:03:43.488 13:28:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:43.488 13:28:06 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:43.488 13:28:06 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:43.488 13:28:06 env -- common/autotest_common.sh@10 -- # set +x
00:03:43.488 ************************************
00:03:43.488 START TEST env_vtophys
00:03:43.488 ************************************
00:03:43.488 13:28:06 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:43.488 EAL: lib.eal log level changed from notice to debug
00:03:43.488 EAL: Detected lcore 0 as core 0 on socket 0
00:03:43.488 EAL: Detected lcore 1 as core 1 on socket 0
00:03:43.488 EAL: Detected lcore 2 as core 2 on socket 0
00:03:43.488 EAL: Detected lcore 3 as core 3 on socket 0
00:03:43.488 EAL: Detected lcore 4 as core 4 on socket 0
00:03:43.488 EAL: Detected lcore 5 as core 5 on socket 0
00:03:43.488 EAL: Detected lcore 6 as core 6 on socket 0
00:03:43.488 EAL: Detected lcore 7 as core 7 on socket 0
00:03:43.488 EAL: Detected lcore 8 as core 8 on socket 0
00:03:43.488 EAL: Detected lcore 9 as core 9 on socket 0
00:03:43.488 EAL: Detected lcore 10 as core 10 on socket 0
00:03:43.488 EAL: Detected lcore 11 as core 11 on socket 0
00:03:43.488 EAL: Detected lcore 12 as core 12 on socket 0
00:03:43.488 EAL: Detected lcore 13 as core 13 on socket 0
00:03:43.488 EAL: Detected lcore 14 as core 14 on socket 0
00:03:43.488 EAL: Detected lcore 15 as core 15 on socket 0
00:03:43.488 EAL: Detected lcore 16 as core 16 on socket 0
00:03:43.488 EAL: Detected lcore 17 as core 17 on socket 0
00:03:43.488 EAL: Detected lcore 18 as core 18 on socket 0
00:03:43.488 EAL: Detected lcore 19 as core 19 on socket 0
00:03:43.488 EAL: Detected lcore 20 as core 20 on socket 0
00:03:43.488 EAL: Detected lcore 21 as core 21 on socket 0
00:03:43.488 EAL: Detected lcore 22 as core 22 on socket 0
00:03:43.488 EAL: Detected lcore 23 as core 23 on socket 0
00:03:43.488 EAL: Detected lcore 24 as core 24 on socket 0
00:03:43.488 EAL: Detected lcore 25 as core 25 on socket 0
00:03:43.488 EAL: Detected lcore 26 as core 26 on socket 0
00:03:43.488 EAL: Detected lcore 27 as core 27 on socket 0
00:03:43.488 EAL: Detected lcore 28 as core 28 on socket 0
00:03:43.488 EAL: Detected lcore 29 as core 29 on socket 0
00:03:43.488 EAL: Detected lcore 30 as core 30 on socket 0
00:03:43.488 EAL: Detected lcore 31 as core 31 on socket 0
00:03:43.488 EAL: Detected lcore 32 as core 32 on socket 0
00:03:43.488 EAL: Detected lcore 33 as core 33 on socket 0
00:03:43.488 EAL: Detected lcore 34 as core 34 on socket 0
00:03:43.488 EAL: Detected lcore 35 as core 35 on socket 0
00:03:43.488 EAL: Detected lcore 36 as core 0 on socket 1
00:03:43.488 EAL: Detected lcore 37 as core 1 on socket 1
00:03:43.488 EAL: Detected lcore 38 as core 2 on socket 1
00:03:43.488 EAL: Detected lcore 39 as core 3 on socket 1
00:03:43.488 EAL: Detected lcore 40 as core 4 on socket 1
00:03:43.488 EAL: Detected lcore 41 as core 5 on socket 1
00:03:43.488 EAL: Detected lcore 42 as core 6 on socket 1
00:03:43.488 EAL: Detected lcore 43 as core 7 on socket 1
00:03:43.488 EAL: Detected lcore 44 as core 8 on socket 1
00:03:43.488 EAL: Detected lcore 45 as core 9 on socket 1
00:03:43.488 EAL: Detected lcore 46 as core 10 on socket 1
00:03:43.488 EAL: Detected lcore 47 as core 11 on socket 1
00:03:43.488 EAL: Detected lcore 48 as core 12 on socket 1
00:03:43.488 EAL: Detected lcore 49 as core 13 on socket 1
00:03:43.488 EAL: Detected lcore 50 as core 14 on socket 1
00:03:43.488 EAL: Detected lcore 51 as core 15 on socket 1
00:03:43.488 EAL: Detected lcore 52 as core 16 on socket 1
00:03:43.488 EAL: Detected lcore 53 as core 17 on socket 1
00:03:43.488 EAL: Detected lcore 54 as core 18 on socket 1
00:03:43.488 EAL: Detected lcore 55 as core 19 on socket 1
00:03:43.488 EAL: Detected lcore 56 as core 20 on socket 1
00:03:43.488 EAL: Detected lcore 57 as core 21 on socket 1
00:03:43.488 EAL: Detected lcore 58 as core 22 on socket 1
00:03:43.488 EAL: Detected lcore 59 as core 23 on socket 1
00:03:43.488 EAL: Detected lcore 60 as core 24 on socket 1
00:03:43.488 EAL: Detected lcore 61 as core 25 on socket 1
00:03:43.488 EAL: Detected lcore 62 as core 26 on socket 1
00:03:43.488 EAL: Detected lcore 63 as core 27 on socket 1
00:03:43.488 EAL: Detected lcore 64 as core 28 on socket 1
00:03:43.488 EAL: Detected lcore 65 as core 29 on socket 1
00:03:43.488 EAL: Detected lcore 66 as core 30 on socket 1
00:03:43.488 EAL: Detected lcore 67 as core 31 on socket 1
00:03:43.488 EAL: Detected lcore 68 as core 32 on socket 1
00:03:43.488 EAL: Detected lcore 69 as core 33 on socket 1
00:03:43.488 EAL: Detected lcore 70 as core 34 on socket 1
00:03:43.488 EAL: Detected lcore 71 as core 35 on socket 1
00:03:43.488 EAL: Detected lcore 72 as core 0 on socket 0
00:03:43.488 EAL: Detected lcore 73 as core 1 on socket 0
00:03:43.488 EAL: Detected lcore 74 as core 2 on socket 0
00:03:43.488 EAL: Detected lcore 75 as core 3 on socket 0
00:03:43.488 EAL: Detected lcore 76 as core 4 on socket 0
00:03:43.488 EAL: Detected lcore 77 as core 5 on socket 0
00:03:43.488 EAL: Detected lcore 78 as core 6 on socket 0
00:03:43.488 EAL: Detected lcore 79 as core 7 on socket 0
00:03:43.488 EAL: Detected lcore 80 as core 8 on socket 0
00:03:43.488 EAL: Detected lcore 81 as core 9 on socket 0
00:03:43.488 EAL: Detected lcore 82 as core 10 on socket 0
00:03:43.488 EAL: Detected lcore 83 as core 11 on socket 0
00:03:43.488 EAL: Detected lcore 84 as core 12 on socket 0
00:03:43.488 EAL: Detected lcore 85 as core 13 on socket 0
00:03:43.488 EAL: Detected lcore 86 as core 14 on socket 0
00:03:43.488 EAL: Detected lcore 87 as core 15 on socket 0
00:03:43.488 EAL: Detected lcore 88 as core 16 on socket 0
00:03:43.488 EAL: Detected lcore 89 as core 17 on socket 0
00:03:43.488 EAL: Detected lcore 90 as core 18 on socket 0
00:03:43.488 EAL: Detected lcore 91 as core 19 on socket 0
00:03:43.488 EAL: Detected lcore 92 as core 20 on socket 0
00:03:43.488 EAL: Detected lcore 93 as core 21 on socket 0
00:03:43.488 EAL: Detected lcore 94 as core 22 on socket 0
00:03:43.488 EAL: Detected lcore 95 as core 23 on socket 0
00:03:43.488 EAL: Detected lcore 96 as core 24 on socket 0
00:03:43.488 EAL: Detected lcore 97 as core 25 on socket 0
00:03:43.488 EAL: Detected lcore 98 as core 26 on socket 0
00:03:43.488 EAL: Detected lcore 99 as core 27 on socket 0
00:03:43.488 EAL: Detected lcore 100 as core 28 on socket 0
00:03:43.488 EAL: Detected lcore 101 as core 29 on socket 0
00:03:43.488 EAL: Detected lcore 102 as core 30 on socket 0
00:03:43.488 EAL: Detected lcore 103 as core 31 on socket 0
00:03:43.488 EAL: Detected lcore 104 as core 32 on socket 0
00:03:43.488 EAL: Detected lcore 105 as core 33 on socket 0
00:03:43.488 EAL: Detected lcore 106 as core 34 on socket 0
00:03:43.488 EAL: Detected lcore 107 as core 35 on socket 0
00:03:43.488 EAL: Detected lcore 108 as core 0 on socket 1
00:03:43.488 EAL: Detected lcore 109 as core 1 on socket 1
00:03:43.488 EAL: Detected lcore 110 as core 2 on socket 1
00:03:43.488 EAL: Detected lcore 111 as core 3 on socket 1
00:03:43.488 EAL: Detected lcore 112 as core 4 on socket 1
00:03:43.488 EAL: Detected lcore 113 as core 5 on socket 1
00:03:43.488 EAL: Detected lcore 114 as core 6 on socket 1
00:03:43.488 EAL: Detected lcore 115 as core 7 on socket 1
00:03:43.488 EAL: Detected lcore 116 as core 8 on socket 1
00:03:43.488 EAL: Detected lcore 117 as core 9 on socket 1
00:03:43.488 EAL: Detected lcore 118 as core 10 on socket 1
00:03:43.488 EAL: Detected lcore 119 as core 11 on socket 1
00:03:43.488 EAL: Detected lcore 120 as core 12 on socket 1
00:03:43.488 EAL: Detected lcore 121 as core 13 on socket 1
00:03:43.488 EAL: Detected lcore 122 as core 14 on socket 1
00:03:43.488 EAL: Detected lcore 123 as core 15 on socket 1
00:03:43.489 EAL: Detected lcore 124 as core 16 on socket 1
00:03:43.489 EAL: Detected lcore 125 as core 17 on socket 1
00:03:43.489 EAL: Detected lcore 126 as core 18 on socket 1
00:03:43.489 EAL: Detected lcore 127 as core 19 on socket 1
00:03:43.489 EAL: Skipped lcore 128 as core 20 on socket 1
00:03:43.489 EAL: Skipped lcore 129 as core 21 on socket 1
00:03:43.489 EAL: Skipped lcore 130 as core 22 on socket 1
00:03:43.489 EAL: Skipped lcore 131 as core 23 on socket 1
00:03:43.489 EAL: Skipped lcore 132 as core 24 on socket 1
00:03:43.489 EAL: Skipped lcore 133 as core 25 on socket 1
00:03:43.489 EAL: Skipped lcore 134 as core 26 on socket 1
00:03:43.489 EAL: Skipped lcore 135 as core 27 on socket 1
00:03:43.489 EAL: Skipped lcore 136 as core 28 on socket 1
00:03:43.489 EAL: Skipped lcore 137 as core 29 on socket 1
00:03:43.489 EAL: Skipped lcore 138 as core 30 on socket 1
00:03:43.489 EAL: Skipped lcore 139 as core 31 on socket 1
00:03:43.489 EAL: Skipped lcore 140 as core 32 on socket 1
00:03:43.489 EAL: Skipped lcore 141 as core 33 on socket 1
00:03:43.489 EAL: Skipped lcore 142 as core 34 on socket 1
00:03:43.489 EAL: Skipped lcore 143 as core 35 on socket 1
00:03:43.489 EAL: Maximum logical cores by configuration: 128
00:03:43.489 EAL: Detected CPU lcores: 128
00:03:43.489 EAL: Detected NUMA nodes: 2
00:03:43.489 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:43.489 EAL: Detected shared linkage of DPDK
00:03:43.489 EAL: No shared files mode enabled, IPC will be disabled
00:03:43.489 EAL: Bus pci wants IOVA as 'DC'
00:03:43.489 EAL: Buses did not request a specific IOVA mode.
00:03:43.489 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:43.489 EAL: Selected IOVA mode 'VA'
00:03:43.489 EAL: Probing VFIO support...
00:03:43.489 EAL: IOMMU type 1 (Type 1) is supported
00:03:43.489 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:43.489 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:43.489 EAL: VFIO support initialized
00:03:43.489 EAL: Ask a virtual area of 0x2e000 bytes
00:03:43.489 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:43.489 EAL: Setting up physically contiguous memory...
00:03:43.489 EAL: Setting maximum number of open files to 524288
00:03:43.489 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:43.489 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:43.489 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:43.489 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:43.489 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.489 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:43.489 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.489 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.489 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:43.489 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:43.489 EAL: Hugepages will be freed exactly as allocated.
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: TSC frequency is ~2400000 KHz
00:03:43.489 EAL: Main lcore 0 is ready (tid=7f1dcf1a4a00;cpuset=[0])
00:03:43.489 EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.489 EAL: Restoring previous memory policy: 0
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was expanded by 2MB
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:43.489 EAL: Mem event callback 'spdk:(nil)' registered
00:03:43.489
00:03:43.489
00:03:43.489 CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.489 http://cunit.sourceforge.net/
00:03:43.489
00:03:43.489
00:03:43.489 Suite: components_suite
00:03:43.489 Test: vtophys_malloc_test ...passed
00:03:43.489 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.489 EAL: Restoring previous memory policy: 4
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was expanded by 4MB
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was shrunk by 4MB
00:03:43.489 EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.489 EAL: Restoring previous memory policy: 4
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was expanded by 6MB
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was shrunk by 6MB
00:03:43.489 EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.489 EAL: Restoring previous memory policy: 4
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was expanded by 10MB
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was shrunk by 10MB
00:03:43.489 EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.489 EAL: Restoring previous memory policy: 4
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was expanded by 18MB
00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.489 EAL: request: mp_malloc_sync
00:03:43.489 EAL: No shared files mode enabled, IPC is disabled
00:03:43.489 EAL: Heap on socket 0 was shrunk by 18MB
00:03:43.489 EAL: Trying to obtain current memory policy.
00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.489 EAL: Restoring previous memory policy: 4 00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.489 EAL: request: mp_malloc_sync 00:03:43.489 EAL: No shared files mode enabled, IPC is disabled 00:03:43.489 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.489 EAL: request: mp_malloc_sync 00:03:43.489 EAL: No shared files mode enabled, IPC is disabled 00:03:43.489 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.489 EAL: Trying to obtain current memory policy. 00:03:43.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.489 EAL: Restoring previous memory policy: 4 00:03:43.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.489 EAL: request: mp_malloc_sync 00:03:43.490 EAL: No shared files mode enabled, IPC is disabled 00:03:43.490 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.490 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.490 EAL: request: mp_malloc_sync 00:03:43.490 EAL: No shared files mode enabled, IPC is disabled 00:03:43.490 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.490 EAL: Trying to obtain current memory policy. 00:03:43.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.490 EAL: Restoring previous memory policy: 4 00:03:43.490 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.490 EAL: request: mp_malloc_sync 00:03:43.490 EAL: No shared files mode enabled, IPC is disabled 00:03:43.490 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.750 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.750 EAL: request: mp_malloc_sync 00:03:43.750 EAL: No shared files mode enabled, IPC is disabled 00:03:43.750 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.750 EAL: Trying to obtain current memory policy. 
00:03:43.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.750 EAL: Restoring previous memory policy: 4 00:03:43.750 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.750 EAL: request: mp_malloc_sync 00:03:43.750 EAL: No shared files mode enabled, IPC is disabled 00:03:43.750 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.750 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.750 EAL: request: mp_malloc_sync 00:03:43.750 EAL: No shared files mode enabled, IPC is disabled 00:03:43.750 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.750 EAL: Trying to obtain current memory policy. 00:03:43.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.750 EAL: Restoring previous memory policy: 4 00:03:43.750 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.750 EAL: request: mp_malloc_sync 00:03:43.750 EAL: No shared files mode enabled, IPC is disabled 00:03:43.750 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.750 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.010 EAL: request: mp_malloc_sync 00:03:44.010 EAL: No shared files mode enabled, IPC is disabled 00:03:44.010 EAL: Heap on socket 0 was shrunk by 514MB 00:03:44.010 EAL: Trying to obtain current memory policy. 
00:03:44.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.010 EAL: Restoring previous memory policy: 4 00:03:44.010 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.010 EAL: request: mp_malloc_sync 00:03:44.010 EAL: No shared files mode enabled, IPC is disabled 00:03:44.010 EAL: Heap on socket 0 was expanded by 1026MB 00:03:44.010 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.271 EAL: request: mp_malloc_sync 00:03:44.271 EAL: No shared files mode enabled, IPC is disabled 00:03:44.271 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.271 passed 00:03:44.271 00:03:44.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.271 suites 1 1 n/a 0 0 00:03:44.271 tests 2 2 2 0 0 00:03:44.271 asserts 497 497 497 0 n/a 00:03:44.271 00:03:44.271 Elapsed time = 0.644 seconds 00:03:44.271 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.271 EAL: request: mp_malloc_sync 00:03:44.271 EAL: No shared files mode enabled, IPC is disabled 00:03:44.271 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.271 EAL: No shared files mode enabled, IPC is disabled 00:03:44.271 EAL: No shared files mode enabled, IPC is disabled 00:03:44.271 EAL: No shared files mode enabled, IPC is disabled 00:03:44.271 00:03:44.271 real 0m0.772s 00:03:44.271 user 0m0.410s 00:03:44.271 sys 0m0.337s 00:03:44.271 13:28:07 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.271 13:28:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:44.271 ************************************ 00:03:44.271 END TEST env_vtophys 00:03:44.271 ************************************ 00:03:44.271 13:28:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.271 13:28:07 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.271 13:28:07 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.271 13:28:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.271 
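The heap expand/shrink sizes the vtophys test exercises above (2, 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) are not arbitrary: after the first 2 MB step they follow 2 + 2^k MB for k = 1..10. This pattern is inferred from the log itself, not from SPDK documentation:

```python
# Reproduce the allocation-size sequence logged by the vtophys test
# above: 2 MB first, then 2 + 2**k MB for k = 1..steps. The pattern is
# inferred from the log, not taken from SPDK documentation.

def vtophys_alloc_sizes_mb(steps: int = 10) -> list[int]:
    """First allocation is 2 MB; each later one is 2 + 2**k MB."""
    return [2] + [2 + (1 << k) for k in range(1, steps + 1)]

sizes = vtophys_alloc_sizes_mb()
assert sizes == [2, 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```

Each size is expanded and immediately shrunk again, which is why every "expanded by N MB" line in the log is paired with a matching "shrunk by N MB".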
************************************ 00:03:44.271 START TEST env_pci 00:03:44.271 ************************************ 00:03:44.271 13:28:07 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.271 00:03:44.271 00:03:44.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.271 http://cunit.sourceforge.net/ 00:03:44.271 00:03:44.271 00:03:44.271 Suite: pci 00:03:44.271 Test: pci_hook ...[2024-11-06 13:28:07.577877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 396924 has claimed it 00:03:44.271 EAL: Cannot find device (10000:00:01.0) 00:03:44.271 EAL: Failed to attach device on primary process 00:03:44.271 passed 00:03:44.271 00:03:44.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.271 suites 1 1 n/a 0 0 00:03:44.271 tests 1 1 1 0 0 00:03:44.271 asserts 25 25 25 0 n/a 00:03:44.271 00:03:44.271 Elapsed time = 0.031 seconds 00:03:44.271 00:03:44.272 real 0m0.052s 00:03:44.272 user 0m0.018s 00:03:44.272 sys 0m0.034s 00:03:44.272 13:28:07 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.272 13:28:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:44.272 ************************************ 00:03:44.272 END TEST env_pci 00:03:44.272 ************************************ 00:03:44.531 13:28:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.531 13:28:07 env -- env/env.sh@15 -- # uname 00:03:44.531 13:28:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.531 13:28:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.531 13:28:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.531 13:28:07 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:44.531 13:28:07 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.531 13:28:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.531 ************************************ 00:03:44.531 START TEST env_dpdk_post_init 00:03:44.531 ************************************ 00:03:44.531 13:28:07 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.531 EAL: Detected CPU lcores: 128 00:03:44.531 EAL: Detected NUMA nodes: 2 00:03:44.531 EAL: Detected shared linkage of DPDK 00:03:44.531 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.531 EAL: Selected IOVA mode 'VA' 00:03:44.531 EAL: VFIO support initialized 00:03:44.531 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.531 EAL: Using IOMMU type 1 (Type 1) 00:03:44.792 EAL: Ignore mapping IO port bar(1) 00:03:44.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:45.052 EAL: Ignore mapping IO port bar(1) 00:03:45.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:45.052 EAL: Ignore mapping IO port bar(1) 00:03:45.312 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:45.312 EAL: Ignore mapping IO port bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:45.572 EAL: Ignore mapping IO port bar(1) 00:03:45.833 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:45.833 EAL: Ignore mapping IO port bar(1) 00:03:45.833 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:46.093 EAL: Ignore mapping IO port bar(1) 00:03:46.093 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:46.354 EAL: Ignore mapping IO port bar(1) 00:03:46.354 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:46.615 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:46.615 EAL: Ignore mapping IO port bar(1) 00:03:46.874 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:46.874 EAL: Ignore mapping IO port bar(1) 00:03:47.135 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:47.135 EAL: Ignore mapping IO port bar(1) 00:03:47.396 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:47.396 EAL: Ignore mapping IO port bar(1) 00:03:47.396 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:47.656 EAL: Ignore mapping IO port bar(1) 00:03:47.656 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:47.917 EAL: Ignore mapping IO port bar(1) 00:03:47.917 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:48.177 EAL: Ignore mapping IO port bar(1) 00:03:48.177 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:48.177 EAL: Ignore mapping IO port bar(1) 00:03:48.437 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:48.437 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:48.437 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:48.437 Starting DPDK initialization... 00:03:48.437 Starting SPDK post initialization... 00:03:48.437 SPDK NVMe probe 00:03:48.437 Attaching to 0000:65:00.0 00:03:48.437 Attached to 0000:65:00.0 00:03:48.437 Cleaning up... 
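The probe lines above identify each device by its extended BDF notation, domain:bus:device.function (e.g. 0000:65:00.0 for the NVMe drive). A small sketch of parsing that notation; the `Bdf` type and `parse_bdf` helper are illustrative names, not SPDK or DPDK API:

```python
# Parse the extended BDF notation (domain:bus:device.function) used in
# the EAL probe lines above, e.g. "0000:80:01.7". All four components
# are hexadecimal. Bdf and parse_bdf are illustrative names, not an
# SPDK/DPDK API.

from typing import NamedTuple

class Bdf(NamedTuple):
    domain: int
    bus: int
    device: int
    function: int

def parse_bdf(text: str) -> Bdf:
    """Split 'dddd:bb:dd.f' into its four hex components."""
    domain, bus, devfn = text.split(":")
    device, function = devfn.split(".")
    return Bdf(int(domain, 16), int(bus, 16), int(device, 16), int(function, 16))

bdf = parse_bdf("0000:80:01.7")
assert bdf == Bdf(0, 0x80, 1, 7)
```

On this particular machine the log shows bus 0x00 devices probed on socket 0 and bus 0x80 devices on socket 1; that mapping is specific to this box, not a general PCI rule.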
00:03:50.350 00:03:50.350 real 0m5.740s 00:03:50.350 user 0m0.108s 00:03:50.350 sys 0m0.174s 00:03:50.350 13:28:13 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.350 13:28:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.350 ************************************ 00:03:50.350 END TEST env_dpdk_post_init 00:03:50.350 ************************************ 00:03:50.350 13:28:13 env -- env/env.sh@26 -- # uname 00:03:50.350 13:28:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:50.350 13:28:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.350 13:28:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.350 13:28:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.350 13:28:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.350 ************************************ 00:03:50.350 START TEST env_mem_callbacks 00:03:50.350 ************************************ 00:03:50.350 13:28:13 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.350 EAL: Detected CPU lcores: 128 00:03:50.350 EAL: Detected NUMA nodes: 2 00:03:50.350 EAL: Detected shared linkage of DPDK 00:03:50.350 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.350 EAL: Selected IOVA mode 'VA' 00:03:50.350 EAL: VFIO support initialized 00:03:50.350 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.350 00:03:50.350 00:03:50.350 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.350 http://cunit.sourceforge.net/ 00:03:50.350 00:03:50.350 00:03:50.350 Suite: memory 00:03:50.350 Test: test ... 
00:03:50.350 register 0x200000200000 2097152 00:03:50.350 malloc 3145728 00:03:50.350 register 0x200000400000 4194304 00:03:50.350 buf 0x200000500000 len 3145728 PASSED 00:03:50.350 malloc 64 00:03:50.350 buf 0x2000004fff40 len 64 PASSED 00:03:50.350 malloc 4194304 00:03:50.350 register 0x200000800000 6291456 00:03:50.350 buf 0x200000a00000 len 4194304 PASSED 00:03:50.350 free 0x200000500000 3145728 00:03:50.350 free 0x2000004fff40 64 00:03:50.350 unregister 0x200000400000 4194304 PASSED 00:03:50.350 free 0x200000a00000 4194304 00:03:50.350 unregister 0x200000800000 6291456 PASSED 00:03:50.350 malloc 8388608 00:03:50.350 register 0x200000400000 10485760 00:03:50.350 buf 0x200000600000 len 8388608 PASSED 00:03:50.350 free 0x200000600000 8388608 00:03:50.350 unregister 0x200000400000 10485760 PASSED 00:03:50.350 passed 00:03:50.350 00:03:50.350 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.350 suites 1 1 n/a 0 0 00:03:50.350 tests 1 1 1 0 0 00:03:50.350 asserts 15 15 15 0 n/a 00:03:50.350 00:03:50.350 Elapsed time = 0.004 seconds 00:03:50.350 00:03:50.350 real 0m0.058s 00:03:50.350 user 0m0.021s 00:03:50.350 sys 0m0.037s 00:03:50.350 13:28:13 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.350 13:28:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:50.350 ************************************ 00:03:50.350 END TEST env_mem_callbacks 00:03:50.350 ************************************ 00:03:50.350 00:03:50.350 real 0m7.446s 00:03:50.350 user 0m1.023s 00:03:50.350 sys 0m0.976s 00:03:50.350 13:28:13 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.350 13:28:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.350 ************************************ 00:03:50.350 END TEST env 00:03:50.350 ************************************ 00:03:50.350 13:28:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.350 13:28:13 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.350 13:28:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.350 13:28:13 -- common/autotest_common.sh@10 -- # set +x 00:03:50.350 ************************************ 00:03:50.350 START TEST rpc 00:03:50.350 ************************************ 00:03:50.350 13:28:13 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.611 * Looking for test storage... 00:03:50.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.611 13:28:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.611 13:28:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.611 13:28:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.611 13:28:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.611 13:28:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.611 13:28:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.611 13:28:13 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.611 13:28:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.611 13:28:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.611 13:28:13 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.611 13:28:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.611 13:28:13 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.611 13:28:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.611 13:28:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.611 13:28:13 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.611 --rc genhtml_branch_coverage=1 00:03:50.611 --rc genhtml_function_coverage=1 00:03:50.611 --rc genhtml_legend=1 00:03:50.611 --rc geninfo_all_blocks=1 00:03:50.611 --rc geninfo_unexecuted_blocks=1 00:03:50.611 00:03:50.611 ' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.611 --rc genhtml_branch_coverage=1 00:03:50.611 --rc genhtml_function_coverage=1 00:03:50.611 --rc genhtml_legend=1 00:03:50.611 --rc geninfo_all_blocks=1 00:03:50.611 --rc geninfo_unexecuted_blocks=1 00:03:50.611 00:03:50.611 ' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:50.611 --rc genhtml_branch_coverage=1 00:03:50.611 --rc genhtml_function_coverage=1 00:03:50.611 --rc genhtml_legend=1 00:03:50.611 --rc geninfo_all_blocks=1 00:03:50.611 --rc geninfo_unexecuted_blocks=1 00:03:50.611 00:03:50.611 ' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.611 --rc genhtml_branch_coverage=1 00:03:50.611 --rc genhtml_function_coverage=1 00:03:50.611 --rc genhtml_legend=1 00:03:50.611 --rc geninfo_all_blocks=1 00:03:50.611 --rc geninfo_unexecuted_blocks=1 00:03:50.611 00:03:50.611 ' 00:03:50.611 13:28:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=398339 00:03:50.611 13:28:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.611 13:28:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 398339 00:03:50.611 13:28:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@833 -- # '[' -z 398339 ']' 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:50.611 13:28:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.611 [2024-11-06 13:28:13.943415] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
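The xtrace above shows scripts/common.sh running cmp_versions to decide whether 1.15 is less than 2: it splits each version into components (IFS=.-:) and compares them numerically, position by position, padding the shorter one. A simplified Python sketch of that comparison, splitting only on '.' (the shell also splits on '-' and ':'):

```python
# A simplified sketch of the component-wise version comparison the
# shell trace above performs (cmp_versions "1.15" "<" "2"): split on
# '.', pad the shorter version with zeros, compare numerically. The
# real script also splits on '-' and ':'.

def version_lt(a: str, b: str) -> bool:
    """True if version a is strictly less than version b."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb  # Python compares lists element-wise

assert version_lt("1.15", "2")       # the comparison in the trace
assert not version_lt("2.0", "1.15")
```

Numeric comparison is what makes "1.15" correctly sort above "1.9" here, where a plain string comparison would get it backwards.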
00:03:50.611 [2024-11-06 13:28:13.943466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398339 ] 00:03:50.871 [2024-11-06 13:28:14.013987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.871 [2024-11-06 13:28:14.049691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.871 [2024-11-06 13:28:14.049728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 398339' to capture a snapshot of events at runtime. 00:03:50.871 [2024-11-06 13:28:14.049736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.871 [2024-11-06 13:28:14.049742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.871 [2024-11-06 13:28:14.049755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid398339 for offline analysis/debug. 
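Around this point the harness calls waitforlisten, which blocks until spdk_tgt is listening on /var/tmp/spdk.sock before any RPC test runs. A minimal sketch of that polling loop; the retry count and sleep interval are illustrative, and the real shell helper typically also verifies the target pid is still alive between retries:

```python
# A minimal sketch of what waitforlisten does above: poll until a UNIX
# domain socket path appears (spdk_tgt creates /var/tmp/spdk.sock when
# it is ready to accept RPCs), giving up after a timeout. Retry count
# and interval are illustrative, not SPDK's actual values.

import os
import time

def wait_for_listen(sock_path: str, retries: int = 100,
                    interval: float = 0.1) -> bool:
    """Return True once sock_path exists, False if retries run out."""
    for _ in range(retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(interval)
    return False
```

If the socket never appears the helper fails the test instead of letting the first RPC hang against a target that is not up yet.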
00:03:50.871 [2024-11-06 13:28:14.050347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.443 13:28:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:51.443 13:28:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:51.443 13:28:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.443 13:28:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.443 13:28:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:51.443 13:28:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:51.443 13:28:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.443 13:28:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.443 13:28:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.443 ************************************ 00:03:51.443 START TEST rpc_integrity 00:03:51.443 ************************************ 00:03:51.443 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:51.443 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.443 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.443 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.443 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.443 13:28:14 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.443 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.703 { 00:03:51.703 "name": "Malloc0", 00:03:51.703 "aliases": [ 00:03:51.703 "1c88f0d4-2e14-4d38-99a3-8c8cebc401a9" 00:03:51.703 ], 00:03:51.703 "product_name": "Malloc disk", 00:03:51.703 "block_size": 512, 00:03:51.703 "num_blocks": 16384, 00:03:51.703 "uuid": "1c88f0d4-2e14-4d38-99a3-8c8cebc401a9", 00:03:51.703 "assigned_rate_limits": { 00:03:51.703 "rw_ios_per_sec": 0, 00:03:51.703 "rw_mbytes_per_sec": 0, 00:03:51.703 "r_mbytes_per_sec": 0, 00:03:51.703 "w_mbytes_per_sec": 0 00:03:51.703 }, 00:03:51.703 "claimed": false, 00:03:51.703 "zoned": false, 00:03:51.703 "supported_io_types": { 00:03:51.703 "read": true, 00:03:51.703 "write": true, 00:03:51.703 "unmap": true, 00:03:51.703 "flush": true, 00:03:51.703 "reset": true, 00:03:51.703 "nvme_admin": false, 00:03:51.703 "nvme_io": false, 00:03:51.703 "nvme_io_md": false, 00:03:51.703 "write_zeroes": true, 00:03:51.703 "zcopy": true, 00:03:51.703 "get_zone_info": false, 00:03:51.703 
"zone_management": false, 00:03:51.703 "zone_append": false, 00:03:51.703 "compare": false, 00:03:51.703 "compare_and_write": false, 00:03:51.703 "abort": true, 00:03:51.703 "seek_hole": false, 00:03:51.703 "seek_data": false, 00:03:51.703 "copy": true, 00:03:51.703 "nvme_iov_md": false 00:03:51.703 }, 00:03:51.703 "memory_domains": [ 00:03:51.703 { 00:03:51.703 "dma_device_id": "system", 00:03:51.703 "dma_device_type": 1 00:03:51.703 }, 00:03:51.703 { 00:03:51.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.703 "dma_device_type": 2 00:03:51.703 } 00:03:51.703 ], 00:03:51.703 "driver_specific": {} 00:03:51.703 } 00:03:51.703 ]' 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.703 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.703 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.703 [2024-11-06 13:28:14.917965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:51.704 [2024-11-06 13:28:14.917997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.704 [2024-11-06 13:28:14.918010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf68da0 00:03:51.704 [2024-11-06 13:28:14.918017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.704 [2024-11-06 13:28:14.919372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.704 [2024-11-06 13:28:14.919393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.704 Passthru0 00:03:51.704 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.704 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:51.704 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.704 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.704 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.704 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.704 { 00:03:51.704 "name": "Malloc0", 00:03:51.704 "aliases": [ 00:03:51.704 "1c88f0d4-2e14-4d38-99a3-8c8cebc401a9" 00:03:51.704 ], 00:03:51.704 "product_name": "Malloc disk", 00:03:51.704 "block_size": 512, 00:03:51.704 "num_blocks": 16384, 00:03:51.704 "uuid": "1c88f0d4-2e14-4d38-99a3-8c8cebc401a9", 00:03:51.704 "assigned_rate_limits": { 00:03:51.704 "rw_ios_per_sec": 0, 00:03:51.704 "rw_mbytes_per_sec": 0, 00:03:51.704 "r_mbytes_per_sec": 0, 00:03:51.704 "w_mbytes_per_sec": 0 00:03:51.704 }, 00:03:51.704 "claimed": true, 00:03:51.704 "claim_type": "exclusive_write", 00:03:51.704 "zoned": false, 00:03:51.704 "supported_io_types": { 00:03:51.704 "read": true, 00:03:51.704 "write": true, 00:03:51.704 "unmap": true, 00:03:51.704 "flush": true, 00:03:51.704 "reset": true, 00:03:51.704 "nvme_admin": false, 00:03:51.704 "nvme_io": false, 00:03:51.704 "nvme_io_md": false, 00:03:51.704 "write_zeroes": true, 00:03:51.704 "zcopy": true, 00:03:51.704 "get_zone_info": false, 00:03:51.704 "zone_management": false, 00:03:51.704 "zone_append": false, 00:03:51.704 "compare": false, 00:03:51.704 "compare_and_write": false, 00:03:51.704 "abort": true, 00:03:51.704 "seek_hole": false, 00:03:51.704 "seek_data": false, 00:03:51.704 "copy": true, 00:03:51.704 "nvme_iov_md": false 00:03:51.704 }, 00:03:51.704 "memory_domains": [ 00:03:51.704 { 00:03:51.704 "dma_device_id": "system", 00:03:51.704 "dma_device_type": 1 00:03:51.704 }, 00:03:51.704 { 00:03:51.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.704 "dma_device_type": 2 00:03:51.704 } 00:03:51.704 ], 00:03:51.704 "driver_specific": {} 00:03:51.704 }, 00:03:51.704 { 
00:03:51.704 "name": "Passthru0", 00:03:51.704 "aliases": [ 00:03:51.704 "0a6b64b8-00b1-5b13-940d-af1c36523cda" 00:03:51.704 ], 00:03:51.704 "product_name": "passthru", 00:03:51.704 "block_size": 512, 00:03:51.704 "num_blocks": 16384, 00:03:51.704 "uuid": "0a6b64b8-00b1-5b13-940d-af1c36523cda", 00:03:51.704 "assigned_rate_limits": { 00:03:51.704 "rw_ios_per_sec": 0, 00:03:51.704 "rw_mbytes_per_sec": 0, 00:03:51.704 "r_mbytes_per_sec": 0, 00:03:51.704 "w_mbytes_per_sec": 0 00:03:51.704 }, 00:03:51.704 "claimed": false, 00:03:51.704 "zoned": false, 00:03:51.704 "supported_io_types": { 00:03:51.704 "read": true, 00:03:51.704 "write": true, 00:03:51.704 "unmap": true, 00:03:51.704 "flush": true, 00:03:51.704 "reset": true, 00:03:51.704 "nvme_admin": false, 00:03:51.704 "nvme_io": false, 00:03:51.704 "nvme_io_md": false, 00:03:51.704 "write_zeroes": true, 00:03:51.704 "zcopy": true, 00:03:51.704 "get_zone_info": false, 00:03:51.704 "zone_management": false, 00:03:51.704 "zone_append": false, 00:03:51.704 "compare": false, 00:03:51.704 "compare_and_write": false, 00:03:51.704 "abort": true, 00:03:51.704 "seek_hole": false, 00:03:51.704 "seek_data": false, 00:03:51.704 "copy": true, 00:03:51.704 "nvme_iov_md": false 00:03:51.704 }, 00:03:51.704 "memory_domains": [ 00:03:51.704 { 00:03:51.704 "dma_device_id": "system", 00:03:51.704 "dma_device_type": 1 00:03:51.704 }, 00:03:51.704 { 00:03:51.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.704 "dma_device_type": 2 00:03:51.704 } 00:03:51.704 ], 00:03:51.704 "driver_specific": { 00:03:51.704 "passthru": { 00:03:51.704 "name": "Passthru0", 00:03:51.704 "base_bdev_name": "Malloc0" 00:03:51.704 } 00:03:51.704 } 00:03:51.704 } 00:03:51.704 ]' 00:03:51.704 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.704 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.704 13:28:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.704 13:28:14 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.704 13:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.704 13:28:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.704 13:28:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.704 13:28:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.704 13:28:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.704 13:28:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.704 00:03:51.704 real 0m0.298s 00:03:51.704 user 0m0.197s 00:03:51.704 sys 0m0.034s 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.704 13:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.704 ************************************ 00:03:51.704 END TEST rpc_integrity 00:03:51.704 ************************************ 00:03:51.964 13:28:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.964 13:28:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.964 13:28:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.964 13:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.964 ************************************ 00:03:51.964 START TEST rpc_plugins 
00:03:51.964 ************************************ 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:51.964 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.964 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.964 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.964 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.964 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.964 { 00:03:51.964 "name": "Malloc1", 00:03:51.964 "aliases": [ 00:03:51.964 "cfaecc91-9257-4f2f-848c-0f30ac64f915" 00:03:51.964 ], 00:03:51.964 "product_name": "Malloc disk", 00:03:51.964 "block_size": 4096, 00:03:51.964 "num_blocks": 256, 00:03:51.964 "uuid": "cfaecc91-9257-4f2f-848c-0f30ac64f915", 00:03:51.964 "assigned_rate_limits": { 00:03:51.964 "rw_ios_per_sec": 0, 00:03:51.964 "rw_mbytes_per_sec": 0, 00:03:51.964 "r_mbytes_per_sec": 0, 00:03:51.964 "w_mbytes_per_sec": 0 00:03:51.964 }, 00:03:51.964 "claimed": false, 00:03:51.964 "zoned": false, 00:03:51.964 "supported_io_types": { 00:03:51.964 "read": true, 00:03:51.964 "write": true, 00:03:51.964 "unmap": true, 00:03:51.964 "flush": true, 00:03:51.964 "reset": true, 00:03:51.964 "nvme_admin": false, 00:03:51.964 "nvme_io": false, 00:03:51.964 "nvme_io_md": false, 00:03:51.964 "write_zeroes": true, 00:03:51.964 "zcopy": true, 00:03:51.964 "get_zone_info": false, 00:03:51.964 "zone_management": false, 00:03:51.964 
"zone_append": false, 00:03:51.964 "compare": false, 00:03:51.964 "compare_and_write": false, 00:03:51.964 "abort": true, 00:03:51.964 "seek_hole": false, 00:03:51.964 "seek_data": false, 00:03:51.964 "copy": true, 00:03:51.964 "nvme_iov_md": false 00:03:51.964 }, 00:03:51.964 "memory_domains": [ 00:03:51.965 { 00:03:51.965 "dma_device_id": "system", 00:03:51.965 "dma_device_type": 1 00:03:51.965 }, 00:03:51.965 { 00:03:51.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.965 "dma_device_type": 2 00:03:51.965 } 00:03:51.965 ], 00:03:51.965 "driver_specific": {} 00:03:51.965 } 00:03:51.965 ]' 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.965 13:28:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.965 00:03:51.965 real 0m0.151s 00:03:51.965 user 0m0.091s 00:03:51.965 sys 0m0.022s 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.965 13:28:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.965 ************************************ 
00:03:51.965 END TEST rpc_plugins 00:03:51.965 ************************************ 00:03:52.225 13:28:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:52.225 13:28:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:52.225 13:28:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:52.225 13:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.225 ************************************ 00:03:52.225 START TEST rpc_trace_cmd_test 00:03:52.225 ************************************ 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:52.225 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid398339", 00:03:52.225 "tpoint_group_mask": "0x8", 00:03:52.225 "iscsi_conn": { 00:03:52.225 "mask": "0x2", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "scsi": { 00:03:52.225 "mask": "0x4", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "bdev": { 00:03:52.225 "mask": "0x8", 00:03:52.225 "tpoint_mask": "0xffffffffffffffff" 00:03:52.225 }, 00:03:52.225 "nvmf_rdma": { 00:03:52.225 "mask": "0x10", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "nvmf_tcp": { 00:03:52.225 "mask": "0x20", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "ftl": { 00:03:52.225 "mask": "0x40", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "blobfs": { 00:03:52.225 "mask": "0x80", 00:03:52.225 
"tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "dsa": { 00:03:52.225 "mask": "0x200", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "thread": { 00:03:52.225 "mask": "0x400", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "nvme_pcie": { 00:03:52.225 "mask": "0x800", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "iaa": { 00:03:52.225 "mask": "0x1000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "nvme_tcp": { 00:03:52.225 "mask": "0x2000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "bdev_nvme": { 00:03:52.225 "mask": "0x4000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "sock": { 00:03:52.225 "mask": "0x8000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "blob": { 00:03:52.225 "mask": "0x10000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "bdev_raid": { 00:03:52.225 "mask": "0x20000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 }, 00:03:52.225 "scheduler": { 00:03:52.225 "mask": "0x40000", 00:03:52.225 "tpoint_mask": "0x0" 00:03:52.225 } 00:03:52.225 }' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:52.225 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:52.486 13:28:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:52.486 00:03:52.486 real 0m0.233s 00:03:52.486 user 0m0.194s 00:03:52.486 sys 0m0.031s 00:03:52.486 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:52.486 13:28:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:52.486 ************************************ 00:03:52.486 END TEST rpc_trace_cmd_test 00:03:52.486 ************************************ 00:03:52.486 13:28:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:52.486 13:28:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:52.486 13:28:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:52.486 13:28:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:52.486 13:28:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:52.486 13:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.486 ************************************ 00:03:52.486 START TEST rpc_daemon_integrity 00:03:52.486 ************************************ 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.486 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.486 { 00:03:52.486 "name": "Malloc2", 00:03:52.486 "aliases": [ 00:03:52.486 "ab93385e-97f7-41df-85bd-2d8121a841d6" 00:03:52.486 ], 00:03:52.486 "product_name": "Malloc disk", 00:03:52.486 "block_size": 512, 00:03:52.486 "num_blocks": 16384, 00:03:52.486 "uuid": "ab93385e-97f7-41df-85bd-2d8121a841d6", 00:03:52.486 "assigned_rate_limits": { 00:03:52.486 "rw_ios_per_sec": 0, 00:03:52.486 "rw_mbytes_per_sec": 0, 00:03:52.487 "r_mbytes_per_sec": 0, 00:03:52.487 "w_mbytes_per_sec": 0 00:03:52.487 }, 00:03:52.487 "claimed": false, 00:03:52.487 "zoned": false, 00:03:52.487 "supported_io_types": { 00:03:52.487 "read": true, 00:03:52.487 "write": true, 00:03:52.487 "unmap": true, 00:03:52.487 "flush": true, 00:03:52.487 "reset": true, 00:03:52.487 "nvme_admin": false, 00:03:52.487 "nvme_io": false, 00:03:52.487 "nvme_io_md": false, 00:03:52.487 "write_zeroes": true, 00:03:52.487 "zcopy": true, 00:03:52.487 "get_zone_info": false, 00:03:52.487 "zone_management": false, 00:03:52.487 "zone_append": false, 00:03:52.487 "compare": false, 00:03:52.487 "compare_and_write": false, 00:03:52.487 "abort": true, 00:03:52.487 "seek_hole": false, 00:03:52.487 "seek_data": false, 00:03:52.487 "copy": true, 00:03:52.487 "nvme_iov_md": false 00:03:52.487 }, 00:03:52.487 "memory_domains": [ 00:03:52.487 { 
00:03:52.487 "dma_device_id": "system", 00:03:52.487 "dma_device_type": 1 00:03:52.487 }, 00:03:52.487 { 00:03:52.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.487 "dma_device_type": 2 00:03:52.487 } 00:03:52.487 ], 00:03:52.487 "driver_specific": {} 00:03:52.487 } 00:03:52.487 ]' 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.487 [2024-11-06 13:28:15.832480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:52.487 [2024-11-06 13:28:15.832510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.487 [2024-11-06 13:28:15.832522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109a090 00:03:52.487 [2024-11-06 13:28:15.832529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.487 [2024-11-06 13:28:15.833849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.487 [2024-11-06 13:28:15.833869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.487 Passthru0 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.487 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.748 { 00:03:52.748 "name": "Malloc2", 00:03:52.748 "aliases": [ 00:03:52.748 "ab93385e-97f7-41df-85bd-2d8121a841d6" 00:03:52.748 ], 00:03:52.748 "product_name": "Malloc disk", 00:03:52.748 "block_size": 512, 00:03:52.748 "num_blocks": 16384, 00:03:52.748 "uuid": "ab93385e-97f7-41df-85bd-2d8121a841d6", 00:03:52.748 "assigned_rate_limits": { 00:03:52.748 "rw_ios_per_sec": 0, 00:03:52.748 "rw_mbytes_per_sec": 0, 00:03:52.748 "r_mbytes_per_sec": 0, 00:03:52.748 "w_mbytes_per_sec": 0 00:03:52.748 }, 00:03:52.748 "claimed": true, 00:03:52.748 "claim_type": "exclusive_write", 00:03:52.748 "zoned": false, 00:03:52.748 "supported_io_types": { 00:03:52.748 "read": true, 00:03:52.748 "write": true, 00:03:52.748 "unmap": true, 00:03:52.748 "flush": true, 00:03:52.748 "reset": true, 00:03:52.748 "nvme_admin": false, 00:03:52.748 "nvme_io": false, 00:03:52.748 "nvme_io_md": false, 00:03:52.748 "write_zeroes": true, 00:03:52.748 "zcopy": true, 00:03:52.748 "get_zone_info": false, 00:03:52.748 "zone_management": false, 00:03:52.748 "zone_append": false, 00:03:52.748 "compare": false, 00:03:52.748 "compare_and_write": false, 00:03:52.748 "abort": true, 00:03:52.748 "seek_hole": false, 00:03:52.748 "seek_data": false, 00:03:52.748 "copy": true, 00:03:52.748 "nvme_iov_md": false 00:03:52.748 }, 00:03:52.748 "memory_domains": [ 00:03:52.748 { 00:03:52.748 "dma_device_id": "system", 00:03:52.748 "dma_device_type": 1 00:03:52.748 }, 00:03:52.748 { 00:03:52.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.748 "dma_device_type": 2 00:03:52.748 } 00:03:52.748 ], 00:03:52.748 "driver_specific": {} 00:03:52.748 }, 00:03:52.748 { 00:03:52.748 "name": "Passthru0", 00:03:52.748 "aliases": [ 00:03:52.748 "df2f4d28-2a5c-580e-8fe4-b53d86f7e022" 00:03:52.748 ], 00:03:52.748 "product_name": "passthru", 00:03:52.748 "block_size": 512, 00:03:52.748 "num_blocks": 16384, 00:03:52.748 "uuid": 
"df2f4d28-2a5c-580e-8fe4-b53d86f7e022", 00:03:52.748 "assigned_rate_limits": { 00:03:52.748 "rw_ios_per_sec": 0, 00:03:52.748 "rw_mbytes_per_sec": 0, 00:03:52.748 "r_mbytes_per_sec": 0, 00:03:52.748 "w_mbytes_per_sec": 0 00:03:52.748 }, 00:03:52.748 "claimed": false, 00:03:52.748 "zoned": false, 00:03:52.748 "supported_io_types": { 00:03:52.748 "read": true, 00:03:52.748 "write": true, 00:03:52.748 "unmap": true, 00:03:52.748 "flush": true, 00:03:52.748 "reset": true, 00:03:52.748 "nvme_admin": false, 00:03:52.748 "nvme_io": false, 00:03:52.748 "nvme_io_md": false, 00:03:52.748 "write_zeroes": true, 00:03:52.748 "zcopy": true, 00:03:52.748 "get_zone_info": false, 00:03:52.748 "zone_management": false, 00:03:52.748 "zone_append": false, 00:03:52.748 "compare": false, 00:03:52.748 "compare_and_write": false, 00:03:52.748 "abort": true, 00:03:52.748 "seek_hole": false, 00:03:52.748 "seek_data": false, 00:03:52.748 "copy": true, 00:03:52.748 "nvme_iov_md": false 00:03:52.748 }, 00:03:52.748 "memory_domains": [ 00:03:52.748 { 00:03:52.748 "dma_device_id": "system", 00:03:52.748 "dma_device_type": 1 00:03:52.748 }, 00:03:52.748 { 00:03:52.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.748 "dma_device_type": 2 00:03:52.748 } 00:03:52.748 ], 00:03:52.748 "driver_specific": { 00:03:52.748 "passthru": { 00:03:52.748 "name": "Passthru0", 00:03:52.748 "base_bdev_name": "Malloc2" 00:03:52.748 } 00:03:52.748 } 00:03:52.748 } 00:03:52.748 ]' 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:52.748 00:03:52.748 real 0m0.298s 00:03:52.748 user 0m0.189s 00:03:52.748 sys 0m0.037s 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:52.748 13:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.748 ************************************ 00:03:52.748 END TEST rpc_daemon_integrity 00:03:52.748 ************************************ 00:03:52.748 13:28:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:52.748 13:28:16 rpc -- rpc/rpc.sh@84 -- # killprocess 398339 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@952 -- # '[' -z 398339 ']' 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@956 -- # kill -0 398339 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@957 -- # uname 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:52.748 13:28:16 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 398339 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 398339' 00:03:52.748 killing process with pid 398339 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@971 -- # kill 398339 00:03:52.748 13:28:16 rpc -- common/autotest_common.sh@976 -- # wait 398339 00:03:53.009 00:03:53.009 real 0m2.603s 00:03:53.009 user 0m3.406s 00:03:53.009 sys 0m0.716s 00:03:53.009 13:28:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:53.009 13:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.009 ************************************ 00:03:53.009 END TEST rpc 00:03:53.009 ************************************ 00:03:53.009 13:28:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:53.009 13:28:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.009 13:28:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.009 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:03:53.009 ************************************ 00:03:53.009 START TEST skip_rpc 00:03:53.009 ************************************ 00:03:53.009 13:28:16 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:53.270 * Looking for test storage... 
00:03:53.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.270 13:28:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:53.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.270 --rc genhtml_branch_coverage=1 00:03:53.270 --rc genhtml_function_coverage=1 00:03:53.270 --rc genhtml_legend=1 00:03:53.270 --rc geninfo_all_blocks=1 00:03:53.270 --rc geninfo_unexecuted_blocks=1 00:03:53.270 00:03:53.270 ' 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:53.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.270 --rc genhtml_branch_coverage=1 00:03:53.270 --rc genhtml_function_coverage=1 00:03:53.270 --rc genhtml_legend=1 00:03:53.270 --rc geninfo_all_blocks=1 00:03:53.270 --rc geninfo_unexecuted_blocks=1 00:03:53.270 00:03:53.270 ' 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:53.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.270 --rc genhtml_branch_coverage=1 00:03:53.270 --rc genhtml_function_coverage=1 00:03:53.270 --rc genhtml_legend=1 00:03:53.270 --rc geninfo_all_blocks=1 00:03:53.270 --rc geninfo_unexecuted_blocks=1 00:03:53.270 00:03:53.270 ' 00:03:53.270 13:28:16 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:53.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.270 --rc genhtml_branch_coverage=1 00:03:53.270 --rc genhtml_function_coverage=1 00:03:53.270 --rc genhtml_legend=1 00:03:53.270 --rc geninfo_all_blocks=1 00:03:53.270 --rc geninfo_unexecuted_blocks=1 00:03:53.271 00:03:53.271 ' 00:03:53.271 13:28:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:53.271 13:28:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:53.271 13:28:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:53.271 13:28:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.271 13:28:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.271 13:28:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.271 ************************************ 00:03:53.271 START TEST skip_rpc 00:03:53.271 ************************************ 00:03:53.271 13:28:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:53.271 13:28:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=399193 00:03:53.271 13:28:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.271 13:28:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:53.271 13:28:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:53.531 [2024-11-06 13:28:16.672778] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:03:53.531 [2024-11-06 13:28:16.672847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399193 ] 00:03:53.531 [2024-11-06 13:28:16.747474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.531 [2024-11-06 13:28:16.789224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:58.817 13:28:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 399193 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 399193 ']' 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 399193 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 399193 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 399193' 00:03:58.817 killing process with pid 399193 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 399193 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 399193 00:03:58.817 00:03:58.817 real 0m5.285s 00:03:58.817 user 0m5.103s 00:03:58.817 sys 0m0.231s 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.817 13:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.817 ************************************ 00:03:58.817 END TEST skip_rpc 00:03:58.817 ************************************ 00:03:58.817 13:28:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.817 13:28:21 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.817 13:28:21 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.817 13:28:21 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.817 ************************************ 00:03:58.817 START TEST skip_rpc_with_json 00:03:58.817 ************************************ 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=400229 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 400229 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 400229 ']' 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.817 13:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.817 [2024-11-06 13:28:22.026818] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:03:58.817 [2024-11-06 13:28:22.026866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400229 ] 00:03:58.817 [2024-11-06 13:28:22.096959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.817 [2024-11-06 13:28:22.133081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.760 [2024-11-06 13:28:22.808314] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:59.760 request: 00:03:59.760 { 00:03:59.760 "trtype": "tcp", 00:03:59.760 "method": "nvmf_get_transports", 00:03:59.760 "req_id": 1 00:03:59.760 } 00:03:59.760 Got JSON-RPC error response 00:03:59.760 response: 00:03:59.760 { 00:03:59.760 "code": -19, 00:03:59.760 "message": "No such device" 00:03:59.760 } 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:59.760 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.761 [2024-11-06 13:28:22.820434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.761 13:28:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.761 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.761 { 00:03:59.761 "subsystems": [ 00:03:59.761 { 00:03:59.761 "subsystem": "fsdev", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "fsdev_set_opts", 00:03:59.761 "params": { 00:03:59.761 "fsdev_io_pool_size": 65535, 00:03:59.761 "fsdev_io_cache_size": 256 00:03:59.761 } 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "vfio_user_target", 00:03:59.761 "config": null 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "keyring", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "iobuf", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "iobuf_set_options", 00:03:59.761 "params": { 00:03:59.761 "small_pool_count": 8192, 00:03:59.761 "large_pool_count": 1024, 00:03:59.761 "small_bufsize": 8192, 00:03:59.761 "large_bufsize": 135168, 00:03:59.761 "enable_numa": false 00:03:59.761 } 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "sock", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "sock_set_default_impl", 00:03:59.761 "params": { 00:03:59.761 "impl_name": "posix" 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "sock_impl_set_options", 00:03:59.761 "params": { 00:03:59.761 "impl_name": "ssl", 00:03:59.761 "recv_buf_size": 4096, 00:03:59.761 "send_buf_size": 4096, 
00:03:59.761 "enable_recv_pipe": true, 00:03:59.761 "enable_quickack": false, 00:03:59.761 "enable_placement_id": 0, 00:03:59.761 "enable_zerocopy_send_server": true, 00:03:59.761 "enable_zerocopy_send_client": false, 00:03:59.761 "zerocopy_threshold": 0, 00:03:59.761 "tls_version": 0, 00:03:59.761 "enable_ktls": false 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "sock_impl_set_options", 00:03:59.761 "params": { 00:03:59.761 "impl_name": "posix", 00:03:59.761 "recv_buf_size": 2097152, 00:03:59.761 "send_buf_size": 2097152, 00:03:59.761 "enable_recv_pipe": true, 00:03:59.761 "enable_quickack": false, 00:03:59.761 "enable_placement_id": 0, 00:03:59.761 "enable_zerocopy_send_server": true, 00:03:59.761 "enable_zerocopy_send_client": false, 00:03:59.761 "zerocopy_threshold": 0, 00:03:59.761 "tls_version": 0, 00:03:59.761 "enable_ktls": false 00:03:59.761 } 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "vmd", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "accel", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "accel_set_options", 00:03:59.761 "params": { 00:03:59.761 "small_cache_size": 128, 00:03:59.761 "large_cache_size": 16, 00:03:59.761 "task_count": 2048, 00:03:59.761 "sequence_count": 2048, 00:03:59.761 "buf_count": 2048 00:03:59.761 } 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "bdev", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "bdev_set_options", 00:03:59.761 "params": { 00:03:59.761 "bdev_io_pool_size": 65535, 00:03:59.761 "bdev_io_cache_size": 256, 00:03:59.761 "bdev_auto_examine": true, 00:03:59.761 "iobuf_small_cache_size": 128, 00:03:59.761 "iobuf_large_cache_size": 16 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "bdev_raid_set_options", 00:03:59.761 "params": { 00:03:59.761 "process_window_size_kb": 1024, 00:03:59.761 "process_max_bandwidth_mb_sec": 0 
00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "bdev_iscsi_set_options", 00:03:59.761 "params": { 00:03:59.761 "timeout_sec": 30 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "bdev_nvme_set_options", 00:03:59.761 "params": { 00:03:59.761 "action_on_timeout": "none", 00:03:59.761 "timeout_us": 0, 00:03:59.761 "timeout_admin_us": 0, 00:03:59.761 "keep_alive_timeout_ms": 10000, 00:03:59.761 "arbitration_burst": 0, 00:03:59.761 "low_priority_weight": 0, 00:03:59.761 "medium_priority_weight": 0, 00:03:59.761 "high_priority_weight": 0, 00:03:59.761 "nvme_adminq_poll_period_us": 10000, 00:03:59.761 "nvme_ioq_poll_period_us": 0, 00:03:59.761 "io_queue_requests": 0, 00:03:59.761 "delay_cmd_submit": true, 00:03:59.761 "transport_retry_count": 4, 00:03:59.761 "bdev_retry_count": 3, 00:03:59.761 "transport_ack_timeout": 0, 00:03:59.761 "ctrlr_loss_timeout_sec": 0, 00:03:59.761 "reconnect_delay_sec": 0, 00:03:59.761 "fast_io_fail_timeout_sec": 0, 00:03:59.761 "disable_auto_failback": false, 00:03:59.761 "generate_uuids": false, 00:03:59.761 "transport_tos": 0, 00:03:59.761 "nvme_error_stat": false, 00:03:59.761 "rdma_srq_size": 0, 00:03:59.761 "io_path_stat": false, 00:03:59.761 "allow_accel_sequence": false, 00:03:59.761 "rdma_max_cq_size": 0, 00:03:59.761 "rdma_cm_event_timeout_ms": 0, 00:03:59.761 "dhchap_digests": [ 00:03:59.761 "sha256", 00:03:59.761 "sha384", 00:03:59.761 "sha512" 00:03:59.761 ], 00:03:59.761 "dhchap_dhgroups": [ 00:03:59.761 "null", 00:03:59.761 "ffdhe2048", 00:03:59.761 "ffdhe3072", 00:03:59.761 "ffdhe4096", 00:03:59.761 "ffdhe6144", 00:03:59.761 "ffdhe8192" 00:03:59.761 ] 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "bdev_nvme_set_hotplug", 00:03:59.761 "params": { 00:03:59.761 "period_us": 100000, 00:03:59.761 "enable": false 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "bdev_wait_for_examine" 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 
00:03:59.761 "subsystem": "scsi", 00:03:59.761 "config": null 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "scheduler", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "framework_set_scheduler", 00:03:59.761 "params": { 00:03:59.761 "name": "static" 00:03:59.761 } 00:03:59.761 } 00:03:59.761 ] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "vhost_scsi", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "vhost_blk", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "ublk", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "nbd", 00:03:59.761 "config": [] 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "subsystem": "nvmf", 00:03:59.761 "config": [ 00:03:59.761 { 00:03:59.761 "method": "nvmf_set_config", 00:03:59.761 "params": { 00:03:59.761 "discovery_filter": "match_any", 00:03:59.761 "admin_cmd_passthru": { 00:03:59.761 "identify_ctrlr": false 00:03:59.761 }, 00:03:59.761 "dhchap_digests": [ 00:03:59.761 "sha256", 00:03:59.761 "sha384", 00:03:59.761 "sha512" 00:03:59.761 ], 00:03:59.761 "dhchap_dhgroups": [ 00:03:59.761 "null", 00:03:59.761 "ffdhe2048", 00:03:59.761 "ffdhe3072", 00:03:59.761 "ffdhe4096", 00:03:59.761 "ffdhe6144", 00:03:59.761 "ffdhe8192" 00:03:59.761 ] 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "nvmf_set_max_subsystems", 00:03:59.761 "params": { 00:03:59.761 "max_subsystems": 1024 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "nvmf_set_crdt", 00:03:59.761 "params": { 00:03:59.761 "crdt1": 0, 00:03:59.761 "crdt2": 0, 00:03:59.761 "crdt3": 0 00:03:59.761 } 00:03:59.761 }, 00:03:59.761 { 00:03:59.761 "method": "nvmf_create_transport", 00:03:59.761 "params": { 00:03:59.761 "trtype": "TCP", 00:03:59.761 "max_queue_depth": 128, 00:03:59.761 "max_io_qpairs_per_ctrlr": 127, 00:03:59.761 "in_capsule_data_size": 4096, 00:03:59.761 "max_io_size": 131072, 00:03:59.761 
"io_unit_size": 131072, 00:03:59.761 "max_aq_depth": 128, 00:03:59.761 "num_shared_buffers": 511, 00:03:59.761 "buf_cache_size": 4294967295, 00:03:59.761 "dif_insert_or_strip": false, 00:03:59.761 "zcopy": false, 00:03:59.762 "c2h_success": true, 00:03:59.762 "sock_priority": 0, 00:03:59.762 "abort_timeout_sec": 1, 00:03:59.762 "ack_timeout": 0, 00:03:59.762 "data_wr_pool_size": 0 00:03:59.762 } 00:03:59.762 } 00:03:59.762 ] 00:03:59.762 }, 00:03:59.762 { 00:03:59.762 "subsystem": "iscsi", 00:03:59.762 "config": [ 00:03:59.762 { 00:03:59.762 "method": "iscsi_set_options", 00:03:59.762 "params": { 00:03:59.762 "node_base": "iqn.2016-06.io.spdk", 00:03:59.762 "max_sessions": 128, 00:03:59.762 "max_connections_per_session": 2, 00:03:59.762 "max_queue_depth": 64, 00:03:59.762 "default_time2wait": 2, 00:03:59.762 "default_time2retain": 20, 00:03:59.762 "first_burst_length": 8192, 00:03:59.762 "immediate_data": true, 00:03:59.762 "allow_duplicated_isid": false, 00:03:59.762 "error_recovery_level": 0, 00:03:59.762 "nop_timeout": 60, 00:03:59.762 "nop_in_interval": 30, 00:03:59.762 "disable_chap": false, 00:03:59.762 "require_chap": false, 00:03:59.762 "mutual_chap": false, 00:03:59.762 "chap_group": 0, 00:03:59.762 "max_large_datain_per_connection": 64, 00:03:59.762 "max_r2t_per_connection": 4, 00:03:59.762 "pdu_pool_size": 36864, 00:03:59.762 "immediate_data_pool_size": 16384, 00:03:59.762 "data_out_pool_size": 2048 00:03:59.762 } 00:03:59.762 } 00:03:59.762 ] 00:03:59.762 } 00:03:59.762 ] 00:03:59.762 } 00:03:59.762 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:59.762 13:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 400229 00:03:59.762 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 400229 ']' 00:03:59.762 13:28:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 400229 00:03:59.762 13:28:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400229 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400229' 00:03:59.762 killing process with pid 400229 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 400229 00:03:59.762 13:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 400229 00:04:00.023 13:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=400568 00:04:00.023 13:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:00.023 13:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 400568 ']' 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400568' 00:04:05.310 killing process with pid 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 400568 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:05.310 00:04:05.310 real 0m6.585s 00:04:05.310 user 0m6.488s 00:04:05.310 sys 0m0.547s 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.310 ************************************ 00:04:05.310 END TEST skip_rpc_with_json 00:04:05.310 ************************************ 00:04:05.310 13:28:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:05.310 13:28:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.310 13:28:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.310 13:28:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.310 ************************************ 00:04:05.310 START TEST skip_rpc_with_delay 00:04:05.310 ************************************ 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:05.310 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.571 [2024-11-06 13:28:28.692548] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:05.571 00:04:05.571 real 0m0.076s 00:04:05.571 user 0m0.042s 00:04:05.571 sys 0m0.033s 00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.571 13:28:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 ************************************ 00:04:05.571 END TEST skip_rpc_with_delay 00:04:05.571 ************************************ 00:04:05.571 13:28:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:05.571 13:28:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:05.571 13:28:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:05.571 13:28:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.571 13:28:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.571 13:28:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 ************************************ 00:04:05.571 START TEST exit_on_failed_rpc_init 00:04:05.571 ************************************ 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=401641 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 401641 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 401641 ']' 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:05.571 13:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 [2024-11-06 13:28:28.844716] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:05.571 [2024-11-06 13:28:28.844774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401641 ] 00:04:05.571 [2024-11-06 13:28:28.915911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.832 [2024-11-06 13:28:28.950788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.403 
13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:06.403 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.403 [2024-11-06 13:28:29.703677] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:06.403 [2024-11-06 13:28:29.703728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401895 ] 00:04:06.664 [2024-11-06 13:28:29.794601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.664 [2024-11-06 13:28:29.846177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.664 [2024-11-06 13:28:29.846248] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:06.664 [2024-11-06 13:28:29.846261] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:06.664 [2024-11-06 13:28:29.846272] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 401641 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 401641 ']' 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 401641 00:04:06.664 13:28:29 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 401641 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 401641' 00:04:06.664 killing process with pid 401641 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 401641 00:04:06.664 13:28:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 401641 00:04:06.924 00:04:06.924 real 0m1.395s 00:04:06.924 user 0m1.670s 00:04:06.924 sys 0m0.384s 00:04:06.924 13:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.924 13:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.924 ************************************ 00:04:06.924 END TEST exit_on_failed_rpc_init 00:04:06.924 ************************************ 00:04:06.924 13:28:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.924 00:04:06.924 real 0m13.855s 00:04:06.924 user 0m13.559s 00:04:06.924 sys 0m1.484s 00:04:06.924 13:28:30 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.924 13:28:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.924 ************************************ 00:04:06.924 END TEST skip_rpc 00:04:06.924 ************************************ 00:04:06.924 13:28:30 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:06.924 13:28:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.924 13:28:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.925 13:28:30 -- common/autotest_common.sh@10 -- # set +x 00:04:07.185 ************************************ 00:04:07.185 START TEST rpc_client 00:04:07.185 ************************************ 00:04:07.185 13:28:30 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:07.185 * Looking for test storage... 00:04:07.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:07.185 13:28:30 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.185 13:28:30 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.185 13:28:30 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.185 13:28:30 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:07.185 13:28:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.186 13:28:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.186 --rc genhtml_branch_coverage=1 00:04:07.186 --rc genhtml_function_coverage=1 00:04:07.186 --rc genhtml_legend=1 00:04:07.186 --rc geninfo_all_blocks=1 00:04:07.186 --rc geninfo_unexecuted_blocks=1 00:04:07.186 00:04:07.186 ' 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.186 --rc genhtml_branch_coverage=1 
00:04:07.186 --rc genhtml_function_coverage=1 00:04:07.186 --rc genhtml_legend=1 00:04:07.186 --rc geninfo_all_blocks=1 00:04:07.186 --rc geninfo_unexecuted_blocks=1 00:04:07.186 00:04:07.186 ' 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.186 --rc genhtml_branch_coverage=1 00:04:07.186 --rc genhtml_function_coverage=1 00:04:07.186 --rc genhtml_legend=1 00:04:07.186 --rc geninfo_all_blocks=1 00:04:07.186 --rc geninfo_unexecuted_blocks=1 00:04:07.186 00:04:07.186 ' 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.186 --rc genhtml_branch_coverage=1 00:04:07.186 --rc genhtml_function_coverage=1 00:04:07.186 --rc genhtml_legend=1 00:04:07.186 --rc geninfo_all_blocks=1 00:04:07.186 --rc geninfo_unexecuted_blocks=1 00:04:07.186 00:04:07.186 ' 00:04:07.186 13:28:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:07.186 OK 00:04:07.186 13:28:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:07.186 00:04:07.186 real 0m0.222s 00:04:07.186 user 0m0.132s 00:04:07.186 sys 0m0.099s 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.186 13:28:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:07.186 ************************************ 00:04:07.186 END TEST rpc_client 00:04:07.186 ************************************ 00:04:07.447 13:28:30 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:07.447 13:28:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.447 13:28:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.447 13:28:30 -- common/autotest_common.sh@10 
-- # set +x 00:04:07.447 ************************************ 00:04:07.447 START TEST json_config 00:04:07.447 ************************************ 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.447 13:28:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.447 13:28:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.447 13:28:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.447 13:28:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.447 13:28:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.447 13:28:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:07.447 13:28:30 json_config -- scripts/common.sh@345 -- # : 1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.447 13:28:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.447 13:28:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@353 -- # local d=1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.447 13:28:30 json_config -- scripts/common.sh@355 -- # echo 1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.447 13:28:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@353 -- # local d=2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.447 13:28:30 json_config -- scripts/common.sh@355 -- # echo 2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.447 13:28:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.447 13:28:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.447 13:28:30 json_config -- scripts/common.sh@368 -- # return 0 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.447 13:28:30 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.447 --rc genhtml_branch_coverage=1 00:04:07.447 --rc genhtml_function_coverage=1 00:04:07.447 --rc genhtml_legend=1 00:04:07.447 --rc geninfo_all_blocks=1 00:04:07.448 --rc geninfo_unexecuted_blocks=1 00:04:07.448 00:04:07.448 ' 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.448 --rc genhtml_branch_coverage=1 00:04:07.448 --rc genhtml_function_coverage=1 00:04:07.448 --rc genhtml_legend=1 00:04:07.448 --rc geninfo_all_blocks=1 00:04:07.448 --rc geninfo_unexecuted_blocks=1 00:04:07.448 00:04:07.448 ' 00:04:07.448 13:28:30 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.448 --rc genhtml_branch_coverage=1 00:04:07.448 --rc genhtml_function_coverage=1 00:04:07.448 --rc genhtml_legend=1 00:04:07.448 --rc geninfo_all_blocks=1 00:04:07.448 --rc geninfo_unexecuted_blocks=1 00:04:07.448 00:04:07.448 ' 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.448 --rc genhtml_branch_coverage=1 00:04:07.448 --rc genhtml_function_coverage=1 00:04:07.448 --rc genhtml_legend=1 00:04:07.448 --rc geninfo_all_blocks=1 00:04:07.448 --rc geninfo_unexecuted_blocks=1 00:04:07.448 00:04:07.448 ' 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.448 13:28:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.448 13:28:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.448 13:28:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.448 13:28:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.448 13:28:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.448 13:28:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.448 13:28:30 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.448 13:28:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:07.448 13:28:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@51 -- # : 0 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.448 13:28:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:07.448 INFO: JSON configuration test init 00:04:07.448 13:28:30 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.448 13:28:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.448 13:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.709 13:28:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:07.709 13:28:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:07.709 13:28:30 json_config -- json_config/common.sh@10 -- # shift 00:04:07.709 13:28:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.709 13:28:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.709 13:28:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.709 13:28:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.709 13:28:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.709 13:28:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=402116 00:04:07.709 13:28:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.709 Waiting for target to run... 
00:04:07.709 13:28:30 json_config -- json_config/common.sh@25 -- # waitforlisten 402116 /var/tmp/spdk_tgt.sock 00:04:07.710 13:28:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@833 -- # '[' -z 402116 ']' 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:07.710 13:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.710 [2024-11-06 13:28:30.890114] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:07.710 [2024-11-06 13:28:30.890179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402116 ] 00:04:07.970 [2024-11-06 13:28:31.216650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.970 [2024-11-06 13:28:31.250405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:08.540 13:28:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.540 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.540 13:28:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:08.540 13:28:31 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:08.540 13:28:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:09.110 13:28:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.110 13:28:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:09.110 13:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@54 -- # sort 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:09.110 13:28:32 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:09.110 13:28:32 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:09.371 13:28:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.371 13:28:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:09.371 13:28:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.371 13:28:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.371 13:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.371 MallocForNvmf0 00:04:09.371 13:28:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:09.371 13:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.632 MallocForNvmf1 00:04:09.632 13:28:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.632 13:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.892 [2024-11-06 13:28:33.014938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.893 13:28:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.893 13:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.893 13:28:33 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.893 13:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.153 13:28:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.153 13:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.153 13:28:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.153 13:28:33 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.413 [2024-11-06 13:28:33.649047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.413 13:28:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:10.413 13:28:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.413 13:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.413 13:28:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:10.413 13:28:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.413 13:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.413 13:28:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:10.413 13:28:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.413 13:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.674 MallocBdevForConfigChangeCheck 00:04:10.674 13:28:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:10.674 13:28:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.674 13:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.674 13:28:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:10.674 13:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.934 13:28:34 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:10.934 INFO: shutting down applications... 00:04:10.934 13:28:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:10.934 13:28:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:10.934 13:28:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:10.934 13:28:34 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:11.505 Calling clear_iscsi_subsystem 00:04:11.505 Calling clear_nvmf_subsystem 00:04:11.505 Calling clear_nbd_subsystem 00:04:11.505 Calling clear_ublk_subsystem 00:04:11.505 Calling clear_vhost_blk_subsystem 00:04:11.505 Calling clear_vhost_scsi_subsystem 00:04:11.505 Calling clear_bdev_subsystem 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:11.505 13:28:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:11.765 13:28:35 json_config -- json_config/json_config.sh@352 -- # break 00:04:11.765 13:28:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:11.765 13:28:35 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:11.765 13:28:35 json_config -- json_config/common.sh@31 -- # local app=target 00:04:11.765 13:28:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.765 13:28:35 json_config -- json_config/common.sh@35 -- # [[ -n 402116 ]] 00:04:11.765 13:28:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 402116 00:04:11.765 13:28:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.765 13:28:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.765 13:28:35 json_config -- json_config/common.sh@41 -- # kill -0 402116 00:04:11.765 13:28:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.336 13:28:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.336 13:28:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.337 13:28:35 json_config -- json_config/common.sh@41 -- # kill -0 402116 00:04:12.337 13:28:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.337 13:28:35 json_config -- json_config/common.sh@43 -- # break 00:04:12.337 13:28:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.337 13:28:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.337 SPDK target shutdown done 00:04:12.337 13:28:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:12.337 INFO: relaunching applications... 
00:04:12.337 13:28:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.337 13:28:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.337 13:28:35 json_config -- json_config/common.sh@10 -- # shift 00:04:12.337 13:28:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.337 13:28:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.337 13:28:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.337 13:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.337 13:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.337 13:28:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=403246 00:04:12.337 13:28:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.337 Waiting for target to run... 00:04:12.337 13:28:35 json_config -- json_config/common.sh@25 -- # waitforlisten 403246 /var/tmp/spdk_tgt.sock 00:04:12.337 13:28:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@833 -- # '[' -z 403246 ']' 00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:12.337 13:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.337 [2024-11-06 13:28:35.669485] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:12.337 [2024-11-06 13:28:35.669544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403246 ] 00:04:12.597 [2024-11-06 13:28:35.956950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.858 [2024-11-06 13:28:35.986426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.429 [2024-11-06 13:28:36.500763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.429 [2024-11-06 13:28:36.533159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:13.429 13:28:36 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:13.429 13:28:36 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:13.429 13:28:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:13.429 00:04:13.429 13:28:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:13.429 13:28:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:13.429 INFO: Checking if target configuration is the same... 
00:04:13.429 13:28:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.429 13:28:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:13.429 13:28:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.429 + '[' 2 -ne 2 ']' 00:04:13.429 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:13.429 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:13.429 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:13.429 +++ basename /dev/fd/62 00:04:13.429 ++ mktemp /tmp/62.XXX 00:04:13.429 + tmp_file_1=/tmp/62.CB6 00:04:13.429 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.429 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:13.429 + tmp_file_2=/tmp/spdk_tgt_config.json.kFn 00:04:13.429 + ret=0 00:04:13.429 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:13.689 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:13.689 + diff -u /tmp/62.CB6 /tmp/spdk_tgt_config.json.kFn 00:04:13.689 + echo 'INFO: JSON config files are the same' 00:04:13.689 INFO: JSON config files are the same 00:04:13.689 + rm /tmp/62.CB6 /tmp/spdk_tgt_config.json.kFn 00:04:13.689 + exit 0 00:04:13.689 13:28:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:13.689 13:28:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:13.689 INFO: changing configuration and checking if this can be detected... 
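The `json_diff.sh` run above saves the live config over RPC, normalizes both JSON documents with `config_filter.py -method sort`, and then compares them with `diff -u` (identical files exit 0, as in the "JSON config files are the same" path). A simplified, self-contained sketch of the same comparison, using python's `json.tool` for canonicalization; SPDK's `config_filter.py` additionally sorts the arrays of subsystem entries, which this version does not:

```shell
# Canonicalize two JSON files (sorted keys, uniform whitespace) into
# temp files and diff them; returns 0 only when they are semantically
# equal up to key order.
json_same() {
    local a=$1 b=$2 ta tb rc
    ta=$(mktemp) && tb=$(mktemp) || return 2
    python3 -m json.tool --sort-keys "$a" > "$ta" &&
    python3 -m json.tool --sort-keys "$b" > "$tb" &&
    diff -u "$ta" "$tb"
    rc=$?
    rm -f "$ta" "$tb"
    return "$rc"
}
```

This is also why the later check detects a change: after `bdev_malloc_delete` removes `MallocBdevForConfigChangeCheck`, the freshly saved config no longer diffs clean against `spdk_tgt_config.json` and the test takes the `ret=1` branch.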
00:04:13.689 13:28:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.689 13:28:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.950 13:28:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:13.950 13:28:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.950 13:28:37 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.950 + '[' 2 -ne 2 ']' 00:04:13.950 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:13.950 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:13.950 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:13.950 +++ basename /dev/fd/62 00:04:13.950 ++ mktemp /tmp/62.XXX 00:04:13.950 + tmp_file_1=/tmp/62.UzF 00:04:13.950 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.950 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:13.950 + tmp_file_2=/tmp/spdk_tgt_config.json.Q52 00:04:13.950 + ret=0 00:04:13.950 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:14.210 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:14.210 + diff -u /tmp/62.UzF /tmp/spdk_tgt_config.json.Q52 00:04:14.210 + ret=1 00:04:14.210 + echo '=== Start of file: /tmp/62.UzF ===' 00:04:14.210 + cat /tmp/62.UzF 00:04:14.210 + echo '=== End of file: /tmp/62.UzF ===' 00:04:14.210 + echo '' 00:04:14.210 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q52 ===' 00:04:14.210 + cat /tmp/spdk_tgt_config.json.Q52 00:04:14.210 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q52 ===' 00:04:14.210 + echo '' 00:04:14.210 + rm /tmp/62.UzF /tmp/spdk_tgt_config.json.Q52 00:04:14.210 + exit 1 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:14.210 INFO: configuration change detected. 
00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 403246 ]] 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.210 13:28:37 json_config -- json_config/json_config.sh@330 -- # killprocess 403246 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@952 -- # '[' -z 403246 ']' 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@956 -- # kill -0 403246 
00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@957 -- # uname 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:14.210 13:28:37 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 403246 00:04:14.471 13:28:37 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:14.471 13:28:37 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:14.471 13:28:37 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 403246' 00:04:14.471 killing process with pid 403246 00:04:14.471 13:28:37 json_config -- common/autotest_common.sh@971 -- # kill 403246 00:04:14.471 13:28:37 json_config -- common/autotest_common.sh@976 -- # wait 403246 00:04:14.732 13:28:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.732 13:28:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:14.732 13:28:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.732 13:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.732 13:28:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:14.732 13:28:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:14.732 INFO: Success 00:04:14.732 00:04:14.732 real 0m7.361s 00:04:14.732 user 0m8.864s 00:04:14.732 sys 0m1.933s 00:04:14.732 13:28:37 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.732 13:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.732 ************************************ 00:04:14.732 END TEST json_config 00:04:14.732 ************************************ 00:04:14.732 13:28:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.732 13:28:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.732 13:28:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.732 13:28:38 -- common/autotest_common.sh@10 -- # set +x 00:04:14.732 ************************************ 00:04:14.732 START TEST json_config_extra_key 00:04:14.732 ************************************ 00:04:14.732 13:28:38 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.732 13:28:38 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.993 --rc genhtml_branch_coverage=1 00:04:14.993 --rc genhtml_function_coverage=1 00:04:14.993 --rc genhtml_legend=1 00:04:14.993 --rc geninfo_all_blocks=1 
00:04:14.993 --rc geninfo_unexecuted_blocks=1 00:04:14.993 00:04:14.993 ' 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.993 --rc genhtml_branch_coverage=1 00:04:14.993 --rc genhtml_function_coverage=1 00:04:14.993 --rc genhtml_legend=1 00:04:14.993 --rc geninfo_all_blocks=1 00:04:14.993 --rc geninfo_unexecuted_blocks=1 00:04:14.993 00:04:14.993 ' 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.993 --rc genhtml_branch_coverage=1 00:04:14.993 --rc genhtml_function_coverage=1 00:04:14.993 --rc genhtml_legend=1 00:04:14.993 --rc geninfo_all_blocks=1 00:04:14.993 --rc geninfo_unexecuted_blocks=1 00:04:14.993 00:04:14.993 ' 00:04:14.993 13:28:38 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.993 --rc genhtml_branch_coverage=1 00:04:14.993 --rc genhtml_function_coverage=1 00:04:14.993 --rc genhtml_legend=1 00:04:14.993 --rc geninfo_all_blocks=1 00:04:14.993 --rc geninfo_unexecuted_blocks=1 00:04:14.993 00:04:14.993 ' 00:04:14.993 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
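The `cmp_versions` trace above (`lt 1.15 2` for the lcov version gate) splits both version strings into fields and compares them numerically, field by field, padding the shorter one with zeros. A minimal sketch of that logic, mirroring the idea of `scripts/common.sh` rather than its exact code:

```shell
# True (exit 0) when dotted version $1 is strictly less than $2.
# Missing fields count as 0, so 1.15 < 2 and 2.39.2 < 2.40.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}
```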
00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.993 13:28:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.993 13:28:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.993 13:28:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.993 13:28:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.993 13:28:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:14.993 13:28:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:14.993 13:28:38 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.993 13:28:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.993 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.994 INFO: launching applications... 00:04:14.994 13:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=404037 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.994 Waiting for target to run... 
00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 404037 /var/tmp/spdk_tgt.sock 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 404037 ']' 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.994 13:28:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:14.994 13:28:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.994 [2024-11-06 13:28:38.313215] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:14.994 [2024-11-06 13:28:38.313297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404037 ] 00:04:15.254 [2024-11-06 13:28:38.628145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.515 [2024-11-06 13:28:38.661090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.775 13:28:39 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.775 13:28:39 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:15.775 00:04:15.775 13:28:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:15.775 INFO: shutting down applications... 00:04:15.775 13:28:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 404037 ]] 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 404037 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 404037 00:04:15.775 13:28:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.346 13:28:39 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 404037 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.346 13:28:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.346 SPDK target shutdown done 00:04:16.346 13:28:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:16.346 Success 00:04:16.347 00:04:16.347 real 0m1.572s 00:04:16.347 user 0m1.173s 00:04:16.347 sys 0m0.451s 00:04:16.347 13:28:39 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.347 13:28:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:16.347 ************************************ 00:04:16.347 END TEST json_config_extra_key 00:04:16.347 ************************************ 00:04:16.347 13:28:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.347 13:28:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.347 13:28:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.347 13:28:39 -- common/autotest_common.sh@10 -- # set +x 00:04:16.347 ************************************ 00:04:16.347 START TEST alias_rpc 00:04:16.347 ************************************ 00:04:16.347 13:28:39 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.608 * Looking for test storage... 
00:04:16.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.608 13:28:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.608 --rc genhtml_branch_coverage=1 00:04:16.608 --rc genhtml_function_coverage=1 00:04:16.608 --rc genhtml_legend=1 00:04:16.608 --rc geninfo_all_blocks=1 00:04:16.608 --rc geninfo_unexecuted_blocks=1 00:04:16.608 00:04:16.608 ' 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.608 --rc genhtml_branch_coverage=1 00:04:16.608 --rc genhtml_function_coverage=1 00:04:16.608 --rc genhtml_legend=1 00:04:16.608 --rc geninfo_all_blocks=1 00:04:16.608 --rc geninfo_unexecuted_blocks=1 00:04:16.608 00:04:16.608 ' 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.608 --rc genhtml_branch_coverage=1 00:04:16.608 --rc genhtml_function_coverage=1 00:04:16.608 --rc genhtml_legend=1 00:04:16.608 --rc geninfo_all_blocks=1 00:04:16.608 --rc geninfo_unexecuted_blocks=1 00:04:16.608 00:04:16.608 ' 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.608 --rc genhtml_branch_coverage=1 00:04:16.608 --rc genhtml_function_coverage=1 00:04:16.608 --rc genhtml_legend=1 00:04:16.608 --rc geninfo_all_blocks=1 00:04:16.608 --rc geninfo_unexecuted_blocks=1 00:04:16.608 00:04:16.608 ' 00:04:16.608 13:28:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.608 13:28:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=404425 00:04:16.608 13:28:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 404425 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 404425 ']' 00:04:16.608 13:28:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.608 13:28:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.608 [2024-11-06 13:28:39.954350] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
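The xtrace above repeatedly steps through the `lt` / `cmp_versions` helpers in scripts/common.sh, splitting each version on `.` and `-` with `read -ra` and comparing component by component to decide whether the installed lcov (1.15 here) predates version 2. A standalone sketch of that comparison logic (simplified; `vercmp` is a hypothetical name, not the actual SPDK helper):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings; echoes "lt", "eq", or "gt".
# Mirrors the IFS=.- / read -ra / per-component numeric loop in the trace.
vercmp() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    # Walk the longer of the two component lists; missing components count as 0.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { echo gt; return; }
        (( a < b )) && { echo lt; return; }
    done
    echo eq
}

vercmp 1.15 2    # prints "lt" -- lcov 1.15 is older, as the trace concludes
vercmp 2.0 2     # prints "eq"
vercmp 10.1 9.9  # prints "gt"
```

Note the comparison is numeric per component, so 1.15 sorts below 2 even though "1.15" > "2" lexically — which is exactly why the trace takes the `decimal`/arithmetic path rather than a string compare.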
00:04:16.608 [2024-11-06 13:28:39.954427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404425 ] 00:04:16.868 [2024-11-06 13:28:40.032068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.868 [2024-11-06 13:28:40.077111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.440 13:28:40 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:17.440 13:28:40 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:17.440 13:28:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:17.700 13:28:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 404425 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 404425 ']' 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 404425 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 404425 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 404425' 00:04:17.700 killing process with pid 404425 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@971 -- # kill 404425 00:04:17.700 13:28:40 alias_rpc -- common/autotest_common.sh@976 -- # wait 404425 00:04:17.960 00:04:17.960 real 0m1.521s 00:04:17.960 user 0m1.674s 00:04:17.960 sys 0m0.412s 00:04:17.960 13:28:41 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.960 13:28:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.960 ************************************ 00:04:17.960 END TEST alias_rpc 00:04:17.960 ************************************ 00:04:17.960 13:28:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:17.960 13:28:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.960 13:28:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.960 13:28:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.960 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:04:17.960 ************************************ 00:04:17.960 START TEST spdkcli_tcp 00:04:17.960 ************************************ 00:04:17.960 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:18.221 * Looking for test storage... 
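The alias_rpc teardown traced above ends in `killprocess`, which probes the pid with `kill -0`, reads the process name via `ps --no-headers -o comm=`, and refuses to signal anything named `sudo` before killing and reaping the target. A minimal standalone sketch of that guarded-kill pattern (a hypothetical simplification, not the real helper from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Guarded kill: only signal a pid that is alive and is not a sudo wrapper,
# then reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # pid must refer to a live process
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1              # never kill the sudo parent itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap; exit status reflects the signal
    return 0
}

sleep 30 &
killprocess $!
```

The `kill -0` probe sends no signal at all; it only checks whether the pid exists and is signalable, which is why the trace runs it before the real `kill`.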
00:04:18.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.221 13:28:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.221 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.222 --rc genhtml_branch_coverage=1 00:04:18.222 --rc genhtml_function_coverage=1 00:04:18.222 --rc genhtml_legend=1 00:04:18.222 --rc geninfo_all_blocks=1 00:04:18.222 --rc geninfo_unexecuted_blocks=1 00:04:18.222 00:04:18.222 ' 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.222 --rc genhtml_branch_coverage=1 00:04:18.222 --rc genhtml_function_coverage=1 00:04:18.222 --rc genhtml_legend=1 00:04:18.222 --rc geninfo_all_blocks=1 00:04:18.222 --rc geninfo_unexecuted_blocks=1 00:04:18.222 00:04:18.222 ' 00:04:18.222 13:28:41 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.222 --rc genhtml_branch_coverage=1 00:04:18.222 --rc genhtml_function_coverage=1 00:04:18.222 --rc genhtml_legend=1 00:04:18.222 --rc geninfo_all_blocks=1 00:04:18.222 --rc geninfo_unexecuted_blocks=1 00:04:18.222 00:04:18.222 ' 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.222 --rc genhtml_branch_coverage=1 00:04:18.222 --rc genhtml_function_coverage=1 00:04:18.222 --rc genhtml_legend=1 00:04:18.222 --rc geninfo_all_blocks=1 00:04:18.222 --rc geninfo_unexecuted_blocks=1 00:04:18.222 00:04:18.222 ' 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=404826 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 404826 00:04:18.222 13:28:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 404826 ']' 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.222 13:28:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.222 [2024-11-06 13:28:41.540541] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:18.222 [2024-11-06 13:28:41.540597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404826 ] 00:04:18.482 [2024-11-06 13:28:41.611988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.482 [2024-11-06 13:28:41.650060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.482 [2024-11-06 13:28:41.650151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.052 13:28:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.052 13:28:42 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:19.052 13:28:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=404844 00:04:19.052 13:28:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:19.052 13:28:42 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:19.313 [ 00:04:19.313 "bdev_malloc_delete", 00:04:19.313 "bdev_malloc_create", 00:04:19.314 "bdev_null_resize", 00:04:19.314 "bdev_null_delete", 00:04:19.314 "bdev_null_create", 00:04:19.314 "bdev_nvme_cuse_unregister", 00:04:19.314 "bdev_nvme_cuse_register", 00:04:19.314 "bdev_opal_new_user", 00:04:19.314 "bdev_opal_set_lock_state", 00:04:19.314 "bdev_opal_delete", 00:04:19.314 "bdev_opal_get_info", 00:04:19.314 "bdev_opal_create", 00:04:19.314 "bdev_nvme_opal_revert", 00:04:19.314 "bdev_nvme_opal_init", 00:04:19.314 "bdev_nvme_send_cmd", 00:04:19.314 "bdev_nvme_set_keys", 00:04:19.314 "bdev_nvme_get_path_iostat", 00:04:19.314 "bdev_nvme_get_mdns_discovery_info", 00:04:19.314 "bdev_nvme_stop_mdns_discovery", 00:04:19.314 "bdev_nvme_start_mdns_discovery", 00:04:19.314 "bdev_nvme_set_multipath_policy", 00:04:19.314 "bdev_nvme_set_preferred_path", 00:04:19.314 "bdev_nvme_get_io_paths", 00:04:19.314 "bdev_nvme_remove_error_injection", 00:04:19.314 "bdev_nvme_add_error_injection", 00:04:19.314 "bdev_nvme_get_discovery_info", 00:04:19.314 "bdev_nvme_stop_discovery", 00:04:19.314 "bdev_nvme_start_discovery", 00:04:19.314 "bdev_nvme_get_controller_health_info", 00:04:19.314 "bdev_nvme_disable_controller", 00:04:19.314 "bdev_nvme_enable_controller", 00:04:19.314 "bdev_nvme_reset_controller", 00:04:19.314 "bdev_nvme_get_transport_statistics", 00:04:19.314 "bdev_nvme_apply_firmware", 00:04:19.314 "bdev_nvme_detach_controller", 00:04:19.314 "bdev_nvme_get_controllers", 00:04:19.314 "bdev_nvme_attach_controller", 00:04:19.314 "bdev_nvme_set_hotplug", 00:04:19.314 "bdev_nvme_set_options", 00:04:19.314 "bdev_passthru_delete", 00:04:19.314 "bdev_passthru_create", 00:04:19.314 "bdev_lvol_set_parent_bdev", 00:04:19.314 "bdev_lvol_set_parent", 00:04:19.314 "bdev_lvol_check_shallow_copy", 00:04:19.314 "bdev_lvol_start_shallow_copy", 00:04:19.314 "bdev_lvol_grow_lvstore", 00:04:19.314 
"bdev_lvol_get_lvols", 00:04:19.314 "bdev_lvol_get_lvstores", 00:04:19.314 "bdev_lvol_delete", 00:04:19.314 "bdev_lvol_set_read_only", 00:04:19.314 "bdev_lvol_resize", 00:04:19.314 "bdev_lvol_decouple_parent", 00:04:19.314 "bdev_lvol_inflate", 00:04:19.314 "bdev_lvol_rename", 00:04:19.314 "bdev_lvol_clone_bdev", 00:04:19.314 "bdev_lvol_clone", 00:04:19.314 "bdev_lvol_snapshot", 00:04:19.314 "bdev_lvol_create", 00:04:19.314 "bdev_lvol_delete_lvstore", 00:04:19.314 "bdev_lvol_rename_lvstore", 00:04:19.314 "bdev_lvol_create_lvstore", 00:04:19.314 "bdev_raid_set_options", 00:04:19.314 "bdev_raid_remove_base_bdev", 00:04:19.314 "bdev_raid_add_base_bdev", 00:04:19.314 "bdev_raid_delete", 00:04:19.314 "bdev_raid_create", 00:04:19.314 "bdev_raid_get_bdevs", 00:04:19.314 "bdev_error_inject_error", 00:04:19.314 "bdev_error_delete", 00:04:19.314 "bdev_error_create", 00:04:19.314 "bdev_split_delete", 00:04:19.314 "bdev_split_create", 00:04:19.314 "bdev_delay_delete", 00:04:19.314 "bdev_delay_create", 00:04:19.314 "bdev_delay_update_latency", 00:04:19.314 "bdev_zone_block_delete", 00:04:19.314 "bdev_zone_block_create", 00:04:19.314 "blobfs_create", 00:04:19.314 "blobfs_detect", 00:04:19.314 "blobfs_set_cache_size", 00:04:19.314 "bdev_aio_delete", 00:04:19.314 "bdev_aio_rescan", 00:04:19.314 "bdev_aio_create", 00:04:19.314 "bdev_ftl_set_property", 00:04:19.314 "bdev_ftl_get_properties", 00:04:19.314 "bdev_ftl_get_stats", 00:04:19.314 "bdev_ftl_unmap", 00:04:19.314 "bdev_ftl_unload", 00:04:19.314 "bdev_ftl_delete", 00:04:19.314 "bdev_ftl_load", 00:04:19.314 "bdev_ftl_create", 00:04:19.314 "bdev_virtio_attach_controller", 00:04:19.314 "bdev_virtio_scsi_get_devices", 00:04:19.314 "bdev_virtio_detach_controller", 00:04:19.314 "bdev_virtio_blk_set_hotplug", 00:04:19.314 "bdev_iscsi_delete", 00:04:19.314 "bdev_iscsi_create", 00:04:19.314 "bdev_iscsi_set_options", 00:04:19.314 "accel_error_inject_error", 00:04:19.314 "ioat_scan_accel_module", 00:04:19.314 "dsa_scan_accel_module", 
00:04:19.314 "iaa_scan_accel_module", 00:04:19.314 "vfu_virtio_create_fs_endpoint", 00:04:19.314 "vfu_virtio_create_scsi_endpoint", 00:04:19.314 "vfu_virtio_scsi_remove_target", 00:04:19.314 "vfu_virtio_scsi_add_target", 00:04:19.314 "vfu_virtio_create_blk_endpoint", 00:04:19.314 "vfu_virtio_delete_endpoint", 00:04:19.314 "keyring_file_remove_key", 00:04:19.314 "keyring_file_add_key", 00:04:19.314 "keyring_linux_set_options", 00:04:19.314 "fsdev_aio_delete", 00:04:19.314 "fsdev_aio_create", 00:04:19.314 "iscsi_get_histogram", 00:04:19.314 "iscsi_enable_histogram", 00:04:19.314 "iscsi_set_options", 00:04:19.314 "iscsi_get_auth_groups", 00:04:19.314 "iscsi_auth_group_remove_secret", 00:04:19.314 "iscsi_auth_group_add_secret", 00:04:19.314 "iscsi_delete_auth_group", 00:04:19.314 "iscsi_create_auth_group", 00:04:19.314 "iscsi_set_discovery_auth", 00:04:19.314 "iscsi_get_options", 00:04:19.314 "iscsi_target_node_request_logout", 00:04:19.314 "iscsi_target_node_set_redirect", 00:04:19.314 "iscsi_target_node_set_auth", 00:04:19.314 "iscsi_target_node_add_lun", 00:04:19.314 "iscsi_get_stats", 00:04:19.314 "iscsi_get_connections", 00:04:19.314 "iscsi_portal_group_set_auth", 00:04:19.314 "iscsi_start_portal_group", 00:04:19.314 "iscsi_delete_portal_group", 00:04:19.314 "iscsi_create_portal_group", 00:04:19.314 "iscsi_get_portal_groups", 00:04:19.314 "iscsi_delete_target_node", 00:04:19.314 "iscsi_target_node_remove_pg_ig_maps", 00:04:19.314 "iscsi_target_node_add_pg_ig_maps", 00:04:19.314 "iscsi_create_target_node", 00:04:19.314 "iscsi_get_target_nodes", 00:04:19.314 "iscsi_delete_initiator_group", 00:04:19.314 "iscsi_initiator_group_remove_initiators", 00:04:19.314 "iscsi_initiator_group_add_initiators", 00:04:19.314 "iscsi_create_initiator_group", 00:04:19.314 "iscsi_get_initiator_groups", 00:04:19.314 "nvmf_set_crdt", 00:04:19.314 "nvmf_set_config", 00:04:19.314 "nvmf_set_max_subsystems", 00:04:19.314 "nvmf_stop_mdns_prr", 00:04:19.314 "nvmf_publish_mdns_prr", 
00:04:19.314 "nvmf_subsystem_get_listeners", 00:04:19.314 "nvmf_subsystem_get_qpairs", 00:04:19.314 "nvmf_subsystem_get_controllers", 00:04:19.314 "nvmf_get_stats", 00:04:19.314 "nvmf_get_transports", 00:04:19.314 "nvmf_create_transport", 00:04:19.314 "nvmf_get_targets", 00:04:19.314 "nvmf_delete_target", 00:04:19.314 "nvmf_create_target", 00:04:19.314 "nvmf_subsystem_allow_any_host", 00:04:19.314 "nvmf_subsystem_set_keys", 00:04:19.314 "nvmf_subsystem_remove_host", 00:04:19.314 "nvmf_subsystem_add_host", 00:04:19.314 "nvmf_ns_remove_host", 00:04:19.314 "nvmf_ns_add_host", 00:04:19.314 "nvmf_subsystem_remove_ns", 00:04:19.314 "nvmf_subsystem_set_ns_ana_group", 00:04:19.314 "nvmf_subsystem_add_ns", 00:04:19.314 "nvmf_subsystem_listener_set_ana_state", 00:04:19.314 "nvmf_discovery_get_referrals", 00:04:19.314 "nvmf_discovery_remove_referral", 00:04:19.314 "nvmf_discovery_add_referral", 00:04:19.314 "nvmf_subsystem_remove_listener", 00:04:19.314 "nvmf_subsystem_add_listener", 00:04:19.314 "nvmf_delete_subsystem", 00:04:19.314 "nvmf_create_subsystem", 00:04:19.314 "nvmf_get_subsystems", 00:04:19.314 "env_dpdk_get_mem_stats", 00:04:19.314 "nbd_get_disks", 00:04:19.314 "nbd_stop_disk", 00:04:19.314 "nbd_start_disk", 00:04:19.314 "ublk_recover_disk", 00:04:19.314 "ublk_get_disks", 00:04:19.314 "ublk_stop_disk", 00:04:19.314 "ublk_start_disk", 00:04:19.314 "ublk_destroy_target", 00:04:19.314 "ublk_create_target", 00:04:19.314 "virtio_blk_create_transport", 00:04:19.314 "virtio_blk_get_transports", 00:04:19.314 "vhost_controller_set_coalescing", 00:04:19.314 "vhost_get_controllers", 00:04:19.314 "vhost_delete_controller", 00:04:19.314 "vhost_create_blk_controller", 00:04:19.314 "vhost_scsi_controller_remove_target", 00:04:19.314 "vhost_scsi_controller_add_target", 00:04:19.314 "vhost_start_scsi_controller", 00:04:19.314 "vhost_create_scsi_controller", 00:04:19.314 "thread_set_cpumask", 00:04:19.314 "scheduler_set_options", 00:04:19.315 "framework_get_governor", 00:04:19.315 
"framework_get_scheduler", 00:04:19.315 "framework_set_scheduler", 00:04:19.315 "framework_get_reactors", 00:04:19.315 "thread_get_io_channels", 00:04:19.315 "thread_get_pollers", 00:04:19.315 "thread_get_stats", 00:04:19.315 "framework_monitor_context_switch", 00:04:19.315 "spdk_kill_instance", 00:04:19.315 "log_enable_timestamps", 00:04:19.315 "log_get_flags", 00:04:19.315 "log_clear_flag", 00:04:19.315 "log_set_flag", 00:04:19.315 "log_get_level", 00:04:19.315 "log_set_level", 00:04:19.315 "log_get_print_level", 00:04:19.315 "log_set_print_level", 00:04:19.315 "framework_enable_cpumask_locks", 00:04:19.315 "framework_disable_cpumask_locks", 00:04:19.315 "framework_wait_init", 00:04:19.315 "framework_start_init", 00:04:19.315 "scsi_get_devices", 00:04:19.315 "bdev_get_histogram", 00:04:19.315 "bdev_enable_histogram", 00:04:19.315 "bdev_set_qos_limit", 00:04:19.315 "bdev_set_qd_sampling_period", 00:04:19.315 "bdev_get_bdevs", 00:04:19.315 "bdev_reset_iostat", 00:04:19.315 "bdev_get_iostat", 00:04:19.315 "bdev_examine", 00:04:19.315 "bdev_wait_for_examine", 00:04:19.315 "bdev_set_options", 00:04:19.315 "accel_get_stats", 00:04:19.315 "accel_set_options", 00:04:19.315 "accel_set_driver", 00:04:19.315 "accel_crypto_key_destroy", 00:04:19.315 "accel_crypto_keys_get", 00:04:19.315 "accel_crypto_key_create", 00:04:19.315 "accel_assign_opc", 00:04:19.315 "accel_get_module_info", 00:04:19.315 "accel_get_opc_assignments", 00:04:19.315 "vmd_rescan", 00:04:19.315 "vmd_remove_device", 00:04:19.315 "vmd_enable", 00:04:19.315 "sock_get_default_impl", 00:04:19.315 "sock_set_default_impl", 00:04:19.315 "sock_impl_set_options", 00:04:19.315 "sock_impl_get_options", 00:04:19.315 "iobuf_get_stats", 00:04:19.315 "iobuf_set_options", 00:04:19.315 "keyring_get_keys", 00:04:19.315 "vfu_tgt_set_base_path", 00:04:19.315 "framework_get_pci_devices", 00:04:19.315 "framework_get_config", 00:04:19.315 "framework_get_subsystems", 00:04:19.315 "fsdev_set_opts", 00:04:19.315 "fsdev_get_opts", 
00:04:19.315 "trace_get_info", 00:04:19.315 "trace_get_tpoint_group_mask", 00:04:19.315 "trace_disable_tpoint_group", 00:04:19.315 "trace_enable_tpoint_group", 00:04:19.315 "trace_clear_tpoint_mask", 00:04:19.315 "trace_set_tpoint_mask", 00:04:19.315 "notify_get_notifications", 00:04:19.315 "notify_get_types", 00:04:19.315 "spdk_get_version", 00:04:19.315 "rpc_get_methods" 00:04:19.315 ] 00:04:19.315 13:28:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.315 13:28:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:19.315 13:28:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 404826 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 404826 ']' 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 404826 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 404826 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 404826' 00:04:19.315 killing process with pid 404826 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 404826 00:04:19.315 13:28:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 404826 00:04:19.576 00:04:19.576 real 0m1.523s 00:04:19.576 user 0m2.800s 00:04:19.576 sys 0m0.433s 00:04:19.576 13:28:42 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.576 13:28:42 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.576 ************************************ 00:04:19.576 END TEST spdkcli_tcp 00:04:19.576 ************************************ 00:04:19.576 13:28:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.576 13:28:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.576 13:28:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.576 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:04:19.576 ************************************ 00:04:19.576 START TEST dpdk_mem_utility 00:04:19.576 ************************************ 00:04:19.576 13:28:42 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.837 * Looking for test storage... 00:04:19.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:19.837 13:28:42 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.837 13:28:42 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.837 13:28:42 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.837 13:28:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 
00:04:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.837 --rc genhtml_branch_coverage=1 00:04:19.837 --rc genhtml_function_coverage=1 00:04:19.837 --rc genhtml_legend=1 00:04:19.837 --rc geninfo_all_blocks=1 00:04:19.837 --rc geninfo_unexecuted_blocks=1 00:04:19.837 00:04:19.837 ' 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.837 --rc genhtml_branch_coverage=1 00:04:19.837 --rc genhtml_function_coverage=1 00:04:19.837 --rc genhtml_legend=1 00:04:19.837 --rc geninfo_all_blocks=1 00:04:19.837 --rc geninfo_unexecuted_blocks=1 00:04:19.837 00:04:19.837 ' 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.837 --rc genhtml_branch_coverage=1 00:04:19.837 --rc genhtml_function_coverage=1 00:04:19.837 --rc genhtml_legend=1 00:04:19.837 --rc geninfo_all_blocks=1 00:04:19.837 --rc geninfo_unexecuted_blocks=1 00:04:19.837 00:04:19.837 ' 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.837 --rc genhtml_branch_coverage=1 00:04:19.837 --rc genhtml_function_coverage=1 00:04:19.837 --rc genhtml_legend=1 00:04:19.837 --rc geninfo_all_blocks=1 00:04:19.837 --rc geninfo_unexecuted_blocks=1 00:04:19.837 00:04:19.837 ' 00:04:19.837 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.837 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=405229 00:04:19.837 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 405229 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 405229 
']' 00:04:19.837 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:19.837 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.837 [2024-11-06 13:28:43.136911] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:19.837 [2024-11-06 13:28:43.136990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405229 ] 00:04:20.098 [2024-11-06 13:28:43.211944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.098 [2024-11-06 13:28:43.254183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.669 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.669 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:20.669 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:20.669 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:20.669 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.669 
13:28:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.669 { 00:04:20.669 "filename": "/tmp/spdk_mem_dump.txt" 00:04:20.669 } 00:04:20.669 13:28:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.669 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:20.669 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:20.669 1 heaps totaling size 810.000000 MiB 00:04:20.669 size: 810.000000 MiB heap id: 0 00:04:20.669 end heaps---------- 00:04:20.669 9 mempools totaling size 595.772034 MiB 00:04:20.669 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:20.669 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:20.669 size: 92.545471 MiB name: bdev_io_405229 00:04:20.669 size: 50.003479 MiB name: msgpool_405229 00:04:20.669 size: 36.509338 MiB name: fsdev_io_405229 00:04:20.669 size: 21.763794 MiB name: PDU_Pool 00:04:20.669 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:20.669 size: 4.133484 MiB name: evtpool_405229 00:04:20.669 size: 0.026123 MiB name: Session_Pool 00:04:20.669 end mempools------- 00:04:20.669 6 memzones totaling size 4.142822 MiB 00:04:20.669 size: 1.000366 MiB name: RG_ring_0_405229 00:04:20.669 size: 1.000366 MiB name: RG_ring_1_405229 00:04:20.669 size: 1.000366 MiB name: RG_ring_4_405229 00:04:20.669 size: 1.000366 MiB name: RG_ring_5_405229 00:04:20.669 size: 0.125366 MiB name: RG_ring_2_405229 00:04:20.669 size: 0.015991 MiB name: RG_ring_3_405229 00:04:20.669 end memzones------- 00:04:20.669 13:28:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.669 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:20.669 list of free elements. 
size: 10.862488 MiB 00:04:20.669 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:20.669 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:20.669 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:20.669 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:20.669 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:20.669 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:20.669 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:20.669 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:20.669 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:20.669 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:20.669 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:20.669 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:20.669 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:20.669 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:20.669 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:20.669 list of standard malloc elements. 
size: 199.218628 MiB 00:04:20.669 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:20.669 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:20.669 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:20.669 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:20.669 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:20.669 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:20.669 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:20.669 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:20.669 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:20.669 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:20.669 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:20.669 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:20.669 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:20.669 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:20.669 list of memzone associated elements. 
size: 599.918884 MiB 00:04:20.669 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:20.669 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.669 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:20.669 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:20.669 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:20.669 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_405229_0 00:04:20.669 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:20.669 associated memzone info: size: 48.002930 MiB name: MP_msgpool_405229_0 00:04:20.669 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:20.669 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_405229_0 00:04:20.669 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:20.669 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:20.669 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:20.669 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.669 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:20.669 associated memzone info: size: 3.000122 MiB name: MP_evtpool_405229_0 00:04:20.669 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:20.670 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_405229 00:04:20.670 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:20.670 associated memzone info: size: 1.007996 MiB name: MP_evtpool_405229 00:04:20.670 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:20.670 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.670 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:20.670 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.670 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:20.670 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.670 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:20.670 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.670 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:20.670 associated memzone info: size: 1.000366 MiB name: RG_ring_0_405229 00:04:20.670 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:20.670 associated memzone info: size: 1.000366 MiB name: RG_ring_1_405229 00:04:20.670 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:20.670 associated memzone info: size: 1.000366 MiB name: RG_ring_4_405229 00:04:20.670 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:20.670 associated memzone info: size: 1.000366 MiB name: RG_ring_5_405229 00:04:20.670 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:20.670 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_405229 00:04:20.670 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:20.670 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_405229 00:04:20.670 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:20.670 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.670 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:20.670 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.670 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:20.670 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.670 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:20.670 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_405229 00:04:20.670 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:20.670 associated memzone info: size: 0.125366 MiB name: RG_ring_2_405229 00:04:20.670 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:20.670 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.670 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:20.670 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.670 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:20.670 associated memzone info: size: 0.015991 MiB name: RG_ring_3_405229 00:04:20.670 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:20.670 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.670 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:20.670 associated memzone info: size: 0.000183 MiB name: MP_msgpool_405229 00:04:20.670 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:20.670 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_405229 00:04:20.670 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:20.670 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_405229 00:04:20.670 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:20.670 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.670 13:28:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.670 13:28:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 405229 00:04:20.670 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 405229 ']' 00:04:20.670 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 405229 00:04:20.670 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:20.670 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 405229 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:20.930 13:28:44 dpdk_mem_utility -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 405229' 00:04:20.930 killing process with pid 405229 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 405229 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 405229 00:04:20.930 00:04:20.930 real 0m1.425s 00:04:20.930 user 0m1.525s 00:04:20.930 sys 0m0.396s 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.930 13:28:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.930 ************************************ 00:04:20.930 END TEST dpdk_mem_utility 00:04:20.930 ************************************ 00:04:21.191 13:28:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:21.191 13:28:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.191 13:28:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.191 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:21.191 ************************************ 00:04:21.191 START TEST event 00:04:21.191 ************************************ 00:04:21.191 13:28:44 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:21.191 * Looking for test storage... 
00:04:21.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:21.191 13:28:44 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.191 13:28:44 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.191 13:28:44 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.191 13:28:44 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.191 13:28:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.191 13:28:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.191 13:28:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.191 13:28:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.192 13:28:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.192 13:28:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.192 13:28:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.192 13:28:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.192 13:28:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.192 13:28:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.192 13:28:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.192 13:28:44 event -- scripts/common.sh@344 -- # case "$op" in 00:04:21.192 13:28:44 event -- scripts/common.sh@345 -- # : 1 00:04:21.192 13:28:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.192 13:28:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.452 13:28:44 event -- scripts/common.sh@365 -- # decimal 1 00:04:21.452 13:28:44 event -- scripts/common.sh@353 -- # local d=1 00:04:21.452 13:28:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.452 13:28:44 event -- scripts/common.sh@355 -- # echo 1 00:04:21.452 13:28:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.452 13:28:44 event -- scripts/common.sh@366 -- # decimal 2 00:04:21.452 13:28:44 event -- scripts/common.sh@353 -- # local d=2 00:04:21.452 13:28:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.452 13:28:44 event -- scripts/common.sh@355 -- # echo 2 00:04:21.452 13:28:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.452 13:28:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.452 13:28:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.452 13:28:44 event -- scripts/common.sh@368 -- # return 0 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.452 --rc genhtml_branch_coverage=1 00:04:21.452 --rc genhtml_function_coverage=1 00:04:21.452 --rc genhtml_legend=1 00:04:21.452 --rc geninfo_all_blocks=1 00:04:21.452 --rc geninfo_unexecuted_blocks=1 00:04:21.452 00:04:21.452 ' 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.452 --rc genhtml_branch_coverage=1 00:04:21.452 --rc genhtml_function_coverage=1 00:04:21.452 --rc genhtml_legend=1 00:04:21.452 --rc geninfo_all_blocks=1 00:04:21.452 --rc geninfo_unexecuted_blocks=1 00:04:21.452 00:04:21.452 ' 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:21.452 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:21.452 --rc genhtml_branch_coverage=1 00:04:21.452 --rc genhtml_function_coverage=1 00:04:21.452 --rc genhtml_legend=1 00:04:21.452 --rc geninfo_all_blocks=1 00:04:21.452 --rc geninfo_unexecuted_blocks=1 00:04:21.452 00:04:21.452 ' 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.452 --rc genhtml_branch_coverage=1 00:04:21.452 --rc genhtml_function_coverage=1 00:04:21.452 --rc genhtml_legend=1 00:04:21.452 --rc geninfo_all_blocks=1 00:04:21.452 --rc geninfo_unexecuted_blocks=1 00:04:21.452 00:04:21.452 ' 00:04:21.452 13:28:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:21.452 13:28:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:21.452 13:28:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.452 13:28:44 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:21.453 13:28:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.453 13:28:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.453 ************************************ 00:04:21.453 START TEST event_perf 00:04:21.453 ************************************ 00:04:21.453 13:28:44 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.453 Running I/O for 1 seconds...[2024-11-06 13:28:44.639452] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:21.453 [2024-11-06 13:28:44.639568] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405547 ] 00:04:21.453 [2024-11-06 13:28:44.719533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.453 [2024-11-06 13:28:44.760544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.453 [2024-11-06 13:28:44.760659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.453 [2024-11-06 13:28:44.760817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.453 Running I/O for 1 seconds...[2024-11-06 13:28:44.760817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.838 00:04:22.838 lcore 0: 179897 00:04:22.838 lcore 1: 179897 00:04:22.838 lcore 2: 179893 00:04:22.838 lcore 3: 179895 00:04:22.838 done. 
00:04:22.838 00:04:22.838 real 0m1.177s 00:04:22.838 user 0m4.109s 00:04:22.838 sys 0m0.067s 00:04:22.838 13:28:45 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.838 13:28:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.838 ************************************ 00:04:22.838 END TEST event_perf 00:04:22.838 ************************************ 00:04:22.838 13:28:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.838 13:28:45 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:22.838 13:28:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.838 13:28:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.838 ************************************ 00:04:22.838 START TEST event_reactor 00:04:22.838 ************************************ 00:04:22.838 13:28:45 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.838 [2024-11-06 13:28:45.896431] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:22.838 [2024-11-06 13:28:45.896515] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405697 ] 00:04:22.838 [2024-11-06 13:28:45.973166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.838 [2024-11-06 13:28:46.010124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.780 test_start 00:04:23.780 oneshot 00:04:23.780 tick 100 00:04:23.780 tick 100 00:04:23.780 tick 250 00:04:23.780 tick 100 00:04:23.780 tick 100 00:04:23.780 tick 100 00:04:23.780 tick 250 00:04:23.780 tick 500 00:04:23.780 tick 100 00:04:23.780 tick 100 00:04:23.780 tick 250 00:04:23.780 tick 100 00:04:23.780 tick 100 00:04:23.780 test_end 00:04:23.780 00:04:23.780 real 0m1.167s 00:04:23.780 user 0m1.094s 00:04:23.780 sys 0m0.070s 00:04:23.780 13:28:47 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.780 13:28:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.780 ************************************ 00:04:23.780 END TEST event_reactor 00:04:23.780 ************************************ 00:04:23.780 13:28:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.780 13:28:47 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:23.780 13:28:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.780 13:28:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.780 ************************************ 00:04:23.780 START TEST event_reactor_perf 00:04:23.780 ************************************ 00:04:23.780 13:28:47 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:23.780 [2024-11-06 13:28:47.141219] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:23.780 [2024-11-06 13:28:47.141303] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406030 ] 00:04:24.040 [2024-11-06 13:28:47.215797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.040 [2024-11-06 13:28:47.251037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.982 test_start 00:04:24.982 test_end 00:04:24.982 Performance: 367583 events per second 00:04:24.982 00:04:24.982 real 0m1.163s 00:04:24.982 user 0m1.091s 00:04:24.982 sys 0m0.068s 00:04:24.982 13:28:48 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.982 13:28:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.982 ************************************ 00:04:24.982 END TEST event_reactor_perf 00:04:24.982 ************************************ 00:04:24.982 13:28:48 event -- event/event.sh@49 -- # uname -s 00:04:24.982 13:28:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.982 13:28:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.982 13:28:48 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.982 13:28:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.982 13:28:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.243 ************************************ 00:04:25.243 START TEST event_scheduler 00:04:25.243 ************************************ 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:25.243 * Looking for test storage... 00:04:25.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.243 13:28:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.243 --rc genhtml_branch_coverage=1 00:04:25.243 --rc genhtml_function_coverage=1 00:04:25.243 --rc genhtml_legend=1 00:04:25.243 --rc geninfo_all_blocks=1 00:04:25.243 --rc geninfo_unexecuted_blocks=1 00:04:25.243 00:04:25.243 ' 00:04:25.243 13:28:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.244 --rc genhtml_branch_coverage=1 00:04:25.244 --rc genhtml_function_coverage=1 00:04:25.244 --rc 
genhtml_legend=1 00:04:25.244 --rc geninfo_all_blocks=1 00:04:25.244 --rc geninfo_unexecuted_blocks=1 00:04:25.244 00:04:25.244 ' 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.244 --rc genhtml_branch_coverage=1 00:04:25.244 --rc genhtml_function_coverage=1 00:04:25.244 --rc genhtml_legend=1 00:04:25.244 --rc geninfo_all_blocks=1 00:04:25.244 --rc geninfo_unexecuted_blocks=1 00:04:25.244 00:04:25.244 ' 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.244 --rc genhtml_branch_coverage=1 00:04:25.244 --rc genhtml_function_coverage=1 00:04:25.244 --rc genhtml_legend=1 00:04:25.244 --rc geninfo_all_blocks=1 00:04:25.244 --rc geninfo_unexecuted_blocks=1 00:04:25.244 00:04:25.244 ' 00:04:25.244 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:25.244 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=406420 00:04:25.244 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.244 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:25.244 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 406420 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 406420 ']' 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.244 13:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.244 [2024-11-06 13:28:48.610895] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:25.244 [2024-11-06 13:28:48.610966] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406420 ] 00:04:25.504 [2024-11-06 13:28:48.670380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.504 [2024-11-06 13:28:48.701515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.504 [2024-11-06 13:28:48.701666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.504 [2024-11-06 13:28:48.701817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.504 [2024-11-06 13:28:48.701818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:25.504 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.504 [2024-11-06 13:28:48.762283] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:25.504 [2024-11-06 13:28:48.762297] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:25.504 [2024-11-06 13:28:48.762305] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:25.504 [2024-11-06 13:28:48.762309] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:25.504 [2024-11-06 13:28:48.762313] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.504 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.504 13:28:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.505 13:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 [2024-11-06 13:28:48.818500] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:25.505 13:28:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.505 13:28:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.505 13:28:48 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.505 13:28:48 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.505 13:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 ************************************ 00:04:25.505 START TEST scheduler_create_thread 00:04:25.505 ************************************ 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 2 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.505 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 3 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 4 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 5 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 6 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 7 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 8 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.765 9 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.765 13:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.336 10 00:04:26.336 13:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.336 13:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:26.336 13:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.336 13:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.718 13:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.719 13:28:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:27.719 13:28:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:27.719 13:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.719 13:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.289 13:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.289 13:28:51 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:28.289 13:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.289 13:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.975 13:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.975 13:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:28.975 13:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:28.975 13:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.975 13:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.916 13:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.916 00:04:29.916 real 0m4.225s 00:04:29.916 user 0m0.026s 00:04:29.916 sys 0m0.006s 00:04:29.916 13:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.916 13:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.916 ************************************ 00:04:29.916 END TEST scheduler_create_thread 00:04:29.916 ************************************ 00:04:29.916 13:28:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:29.916 13:28:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 406420 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 406420 ']' 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@956 -- # kill 
-0 406420 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 406420 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 406420' 00:04:29.916 killing process with pid 406420 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 406420 00:04:29.916 13:28:53 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 406420 00:04:30.177 [2024-11-06 13:28:53.363842] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:30.177 00:04:30.177 real 0m5.156s 00:04:30.177 user 0m10.275s 00:04:30.177 sys 0m0.365s 00:04:30.177 13:28:53 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.177 13:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.177 ************************************ 00:04:30.177 END TEST event_scheduler 00:04:30.177 ************************************ 00:04:30.437 13:28:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:30.437 13:28:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:30.437 13:28:53 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.437 13:28:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.437 13:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.437 ************************************ 00:04:30.437 START TEST app_repeat 00:04:30.437 ************************************ 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=407484 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 407484' 00:04:30.437 Process app_repeat pid: 407484 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:30.437 spdk_app_start Round 0 00:04:30.437 13:28:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 407484 /var/tmp/spdk-nbd.sock 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 407484 ']' 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.437 13:28:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.437 [2024-11-06 13:28:53.641441] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:30.437 [2024-11-06 13:28:53.641513] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407484 ] 00:04:30.437 [2024-11-06 13:28:53.715736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.437 [2024-11-06 13:28:53.756489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.437 [2024-11-06 13:28:53.756492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.698 13:28:53 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.698 13:28:53 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:30.698 13:28:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.698 Malloc0 00:04:30.698 13:28:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.958 Malloc1 00:04:30.958 13:28:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.958 
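The nbd checks that follow in this log all reduce to one pattern: `dd` a fixed-size chunk through the device, then `cmp` it against the source file. A hedged sketch of that write-then-verify flow, using plain temp files in place of `/dev/nbd0`/`/dev/nbd1` (assumption: no real nbd devices are available outside the CI host):

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify pattern: generate random data, write it
# to each "device", then byte-compare the first 1 MiB back against the source.
tmp=$(mktemp)
dev0=$(mktemp)   # stands in for /dev/nbd0
dev1=$(mktemp)   # stands in for /dev/nbd1

# 256 x 4096-byte blocks = 1 MiB of random test data, as in the log.
dd if=/dev/urandom of="$tmp" bs=4096 count=256 2>/dev/null

for d in "$dev0" "$dev1"; do
    dd if="$tmp" of="$d" bs=4096 count=256 2>/dev/null
done

for d in "$dev0" "$dev1"; do
    # cmp -n limits the comparison to the written region.
    cmp -n 1048576 "$tmp" "$d" && echo "verify ok: $d"
done

rm -f "$tmp" "$dev0" "$dev1"
```

Real runs add `iflag=direct`/`oflag=direct` to bypass the page cache when talking to the block device; that flag is omitted here because it is not meaningful for regular files.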
13:28:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.958 13:28:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.218 /dev/nbd0 00:04:31.218 13:28:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.218 13:28:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.218 13:28:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:31.218 13:28:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:31.219 1+0 records in 00:04:31.219 1+0 records out 00:04:31.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242799 s, 16.9 MB/s 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:31.219 13:28:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:31.219 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.219 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.219 13:28:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.480 /dev/nbd1 00:04:31.480 13:28:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.480 13:28:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:31.480 13:28:54 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:31.480 13:28:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.480 1+0 records in 00:04:31.480 1+0 records out 00:04:31.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285928 s, 14.3 MB/s 00:04:31.481 13:28:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.481 13:28:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:31.481 13:28:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.481 13:28:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:31.481 13:28:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.481 { 00:04:31.481 "nbd_device": "/dev/nbd0", 00:04:31.481 "bdev_name": "Malloc0" 00:04:31.481 }, 00:04:31.481 { 00:04:31.481 "nbd_device": "/dev/nbd1", 00:04:31.481 "bdev_name": "Malloc1" 00:04:31.481 } 00:04:31.481 ]' 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.481 { 00:04:31.481 "nbd_device": "/dev/nbd0", 00:04:31.481 "bdev_name": "Malloc0" 00:04:31.481 
}, 00:04:31.481 { 00:04:31.481 "nbd_device": "/dev/nbd1", 00:04:31.481 "bdev_name": "Malloc1" 00:04:31.481 } 00:04:31.481 ]' 00:04:31.481 13:28:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.742 /dev/nbd1' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.742 /dev/nbd1' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.742 256+0 records in 00:04:31.742 256+0 records out 00:04:31.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117326 s, 89.4 MB/s 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.742 256+0 records in 00:04:31.742 256+0 records out 00:04:31.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165508 s, 63.4 MB/s 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.742 256+0 records in 00:04:31.742 256+0 records out 00:04:31.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167873 s, 62.5 MB/s 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.742 13:28:54 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.742 13:28:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.002 13:28:55 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.002 13:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.262 13:28:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.262 13:28:55 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.522 13:28:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.522 [2024-11-06 13:28:55.851136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.522 [2024-11-06 13:28:55.886810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.522 [2024-11-06 13:28:55.886997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.783 [2024-11-06 13:28:55.918632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.783 [2024-11-06 13:28:55.918668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.090 13:28:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.090 13:28:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:36.090 spdk_app_start Round 1 00:04:36.090 13:28:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 407484 /var/tmp/spdk-nbd.sock 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 407484 ']' 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
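The repeated "Waiting for process to start up and listen on UNIX domain socket ..." messages come from a waitforlisten-style retry loop with `max_retries=100`. A minimal sketch of that polling idiom (the background subshell stands in for the server creating its socket; the path and retry budget are illustrative, and a plain file stands in for a bound UNIX socket):

```shell
#!/bin/sh
# Sketch of a waitforlisten-style loop: poll for the server's socket path
# with a bounded number of retries before giving up.
sock=$(mktemp -u)        # path the "server" will create; does not exist yet
max_retries=10

( sleep 0.2; : > "$sock" ) &   # stand-in for the daemon binding its socket

i=0
until [ -e "$sock" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "timed out waiting for $sock" >&2
        exit 1
    fi
    sleep 0.1
done
echo "listening on $sock"

wait                     # reap the background helper
rm -f "$sock"
```

The real helper additionally probes the socket with an RPC call rather than just checking for the path, so existence of the file alone is a weaker check than what the harness performs.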
00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.090 13:28:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:36.090 13:28:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.090 Malloc0 00:04:36.090 13:28:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.090 Malloc1 00:04:36.090 13:28:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.090 13:28:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.090 /dev/nbd0 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.351 1+0 records in 00:04:36.351 1+0 records out 00:04:36.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198064 s, 20.7 MB/s 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:36.351 13:28:59 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.351 /dev/nbd1 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.351 13:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.351 1+0 records in 00:04:36.351 1+0 records out 00:04:36.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307836 s, 13.3 MB/s 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:36.351 13:28:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.612 13:28:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:36.612 13:28:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.612 { 00:04:36.612 "nbd_device": "/dev/nbd0", 00:04:36.612 "bdev_name": "Malloc0" 00:04:36.612 }, 00:04:36.612 { 00:04:36.612 "nbd_device": "/dev/nbd1", 00:04:36.612 "bdev_name": "Malloc1" 00:04:36.612 } 00:04:36.612 ]' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.612 { 00:04:36.612 "nbd_device": "/dev/nbd0", 00:04:36.612 "bdev_name": "Malloc0" 00:04:36.612 }, 00:04:36.612 { 00:04:36.612 "nbd_device": "/dev/nbd1", 00:04:36.612 "bdev_name": "Malloc1" 00:04:36.612 } 00:04:36.612 ]' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.612 /dev/nbd1' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.612 /dev/nbd1' 00:04:36.612 
13:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.612 256+0 records in 00:04:36.612 256+0 records out 00:04:36.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128007 s, 81.9 MB/s 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.612 13:28:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.873 256+0 records in 00:04:36.873 256+0 records out 00:04:36.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162105 s, 64.7 MB/s 00:04:36.873 13:28:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.873 13:28:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.873 256+0 records in 00:04:36.873 256+0 records out 00:04:36.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248222 s, 42.2 MB/s 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.873 13:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.133 13:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.133 13:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.133 13:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.134 13:29:00 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.134 13:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.394 13:29:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.395 13:29:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.395 13:29:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.395 13:29:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.655 13:29:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.655 [2024-11-06 13:29:00.927392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.655 [2024-11-06 13:29:00.962779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.655 [2024-11-06 13:29:00.962810] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.655 [2024-11-06 13:29:00.995247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.655 [2024-11-06 13:29:00.995283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.953 13:29:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.953 13:29:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:40.953 spdk_app_start Round 2 00:04:40.953 13:29:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 407484 /var/tmp/spdk-nbd.sock 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 407484 ']' 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.953 13:29:03 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:40.954 13:29:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.954 Malloc0 00:04:40.954 13:29:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.954 Malloc1 00:04:40.954 13:29:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.954 13:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.214 /dev/nbd0 00:04:41.214 13:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.214 13:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.214 13:29:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:41.214 13:29:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:41.214 13:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.215 1+0 records in 00:04:41.215 1+0 records out 00:04:41.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449734 s, 9.1 MB/s 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:41.215 13:29:04 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:41.215 13:29:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:41.215 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.215 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.215 13:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.476 /dev/nbd1 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.476 1+0 records in 00:04:41.476 1+0 records out 00:04:41.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292093 s, 14.0 MB/s 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:41.476 13:29:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.476 13:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.737 { 00:04:41.737 "nbd_device": "/dev/nbd0", 00:04:41.737 "bdev_name": "Malloc0" 00:04:41.737 }, 00:04:41.737 { 00:04:41.737 "nbd_device": "/dev/nbd1", 00:04:41.737 "bdev_name": "Malloc1" 00:04:41.737 } 00:04:41.737 ]' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.737 { 00:04:41.737 "nbd_device": "/dev/nbd0", 00:04:41.737 "bdev_name": "Malloc0" 00:04:41.737 }, 00:04:41.737 { 00:04:41.737 "nbd_device": "/dev/nbd1", 00:04:41.737 "bdev_name": "Malloc1" 00:04:41.737 } 00:04:41.737 ]' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.737 /dev/nbd1' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.737 /dev/nbd1' 00:04:41.737 
13:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.737 13:29:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.737 256+0 records in 00:04:41.737 256+0 records out 00:04:41.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127175 s, 82.5 MB/s 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.737 256+0 records in 00:04:41.737 256+0 records out 00:04:41.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281344 s, 37.3 MB/s 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.737 256+0 records in 00:04:41.737 256+0 records out 00:04:41.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276105 s, 38.0 MB/s 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.737 13:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.998 13:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.259 13:29:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.259 13:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.519 13:29:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.519 13:29:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.519 13:29:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.780 [2024-11-06 13:29:05.990702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.780 [2024-11-06 13:29:06.026081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.780 [2024-11-06 13:29:06.026082] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.780 [2024-11-06 13:29:06.057683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.780 [2024-11-06 13:29:06.057717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.085 13:29:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 407484 /var/tmp/spdk-nbd.sock 00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 407484 ']' 00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.085 13:29:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:46.085 13:29:09 event.app_repeat -- event/event.sh@39 -- # killprocess 407484 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 407484 ']' 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 407484 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 407484 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 407484' 00:04:46.085 killing process with pid 407484 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@971 -- # kill 407484 00:04:46.085 13:29:09 event.app_repeat -- common/autotest_common.sh@976 -- # wait 407484 00:04:46.085 spdk_app_start is called in Round 0. 00:04:46.085 Shutdown signal received, stop current app iteration 00:04:46.085 Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 reinitialization... 00:04:46.086 spdk_app_start is called in Round 1. 00:04:46.086 Shutdown signal received, stop current app iteration 00:04:46.086 Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 reinitialization... 00:04:46.086 spdk_app_start is called in Round 2. 
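The killprocess sequence above (uname check, `ps --no-headers -o comm=`, the sudo guard, then kill/wait) can be sketched as a standalone helper. Names and message wording are assumptions mirroring the trace, not the autotest_common.sh implementation itself:

```shell
# Hedged re-creation of the traced killprocess pattern: look up the
# command name first, refuse to signal anything running under sudo,
# then send SIGTERM.
killprocess() {
    local pid=$1 name
    name=$(ps -o comm= -p "$pid") || return 1   # pid already gone
    [ "$name" = sudo ] && return 1              # never TERM a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
}
```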
00:04:46.086 Shutdown signal received, stop current app iteration 00:04:46.086 Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 reinitialization... 00:04:46.086 spdk_app_start is called in Round 3. 00:04:46.086 Shutdown signal received, stop current app iteration 00:04:46.086 13:29:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:46.086 13:29:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:46.086 00:04:46.086 real 0m15.612s 00:04:46.086 user 0m34.020s 00:04:46.086 sys 0m2.259s 00:04:46.086 13:29:09 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.086 13:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.086 ************************************ 00:04:46.086 END TEST app_repeat 00:04:46.086 ************************************ 00:04:46.086 13:29:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:46.086 13:29:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.086 13:29:09 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.086 13:29:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.086 13:29:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.086 ************************************ 00:04:46.086 START TEST cpu_locks 00:04:46.086 ************************************ 00:04:46.086 13:29:09 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.086 * Looking for test storage... 
00:04:46.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:46.086 13:29:09 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.086 13:29:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.086 13:29:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.349 13:29:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.349 --rc genhtml_branch_coverage=1 00:04:46.349 --rc genhtml_function_coverage=1 00:04:46.349 --rc genhtml_legend=1 00:04:46.349 --rc geninfo_all_blocks=1 00:04:46.349 --rc geninfo_unexecuted_blocks=1 00:04:46.349 00:04:46.349 ' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.349 --rc genhtml_branch_coverage=1 00:04:46.349 --rc genhtml_function_coverage=1 00:04:46.349 --rc genhtml_legend=1 00:04:46.349 --rc geninfo_all_blocks=1 00:04:46.349 --rc geninfo_unexecuted_blocks=1 
00:04:46.349 00:04:46.349 ' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.349 --rc genhtml_branch_coverage=1 00:04:46.349 --rc genhtml_function_coverage=1 00:04:46.349 --rc genhtml_legend=1 00:04:46.349 --rc geninfo_all_blocks=1 00:04:46.349 --rc geninfo_unexecuted_blocks=1 00:04:46.349 00:04:46.349 ' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.349 --rc genhtml_branch_coverage=1 00:04:46.349 --rc genhtml_function_coverage=1 00:04:46.349 --rc genhtml_legend=1 00:04:46.349 --rc geninfo_all_blocks=1 00:04:46.349 --rc geninfo_unexecuted_blocks=1 00:04:46.349 00:04:46.349 ' 00:04:46.349 13:29:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:46.349 13:29:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:46.349 13:29:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:46.349 13:29:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.349 13:29:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 ************************************ 00:04:46.349 START TEST default_locks 00:04:46.349 ************************************ 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=410749 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 410749 00:04:46.349 13:29:09 event.cpu_locks.default_locks 
-- common/autotest_common.sh@833 -- # '[' -z 410749 ']' 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 13:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.349 [2024-11-06 13:29:09.579067] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
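The `lt 1.15 2` check traced in the lcov probe above splits both versions on `.` into arrays and compares them component-wise, padding missing components with zeros. A self-contained sketch of that comparison (the function name `ver_lt` is ours, not the scripts/common.sh API):

```shell
# Component-wise "less than" for dotted version strings, in the style
# of scripts/common.sh's cmp_versions trace above.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)   # IFS=. splits "1.15" into (1 15)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal is not less-than
}
```

Comparing numerically per component is what makes `1.2 < 1.10` come out true, which a plain string compare would get wrong.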
00:04:46.349 [2024-11-06 13:29:09.579131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410749 ] 00:04:46.349 [2024-11-06 13:29:09.657442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.349 [2024-11-06 13:29:09.701227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.293 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.293 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:47.293 13:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 410749 00:04:47.293 13:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 410749 00:04:47.293 13:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.554 lslocks: write error 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 410749 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 410749 ']' 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 410749 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.554 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 410749 00:04:47.815 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.815 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.815 13:29:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 410749' 00:04:47.815 killing process with pid 410749 00:04:47.815 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 410749 00:04:47.815 13:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 410749 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 410749 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 410749 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 410749 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 410749 ']' 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (410749) - No such process 00:04:47.815 ERROR: process (pid: 410749) is no longer running 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:47.815 00:04:47.815 real 0m1.642s 00:04:47.815 user 0m1.746s 00:04:47.815 sys 0m0.584s 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.815 13:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.815 ************************************ 00:04:47.815 END TEST default_locks 00:04:47.815 ************************************ 00:04:48.076 13:29:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:48.076 13:29:11 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.076 13:29:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.076 13:29:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.076 ************************************ 00:04:48.076 START TEST default_locks_via_rpc 00:04:48.076 ************************************ 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=411122 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 411122 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 411122 ']' 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.076 13:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.076 [2024-11-06 13:29:11.299629] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
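The `NOT waitforlisten` step in the default_locks teardown above asserts that a command fails (waiting on a killed pid must error out). A minimal sketch of such a negative-test wrapper; the real helper additionally special-cases statuses above 128 (signal deaths), which is omitted here:

```shell
# NOT-style wrapper for negative tests: succeed only when the wrapped
# command exits nonzero, mirroring the es bookkeeping in the trace.
NOT() {
    local es=0
    "$@" || es=$?   # capture the failure without tripping set -e
    (( es != 0 ))
}
```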
00:04:48.076 [2024-11-06 13:29:11.299685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411122 ] 00:04:48.076 [2024-11-06 13:29:11.374222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.076 [2024-11-06 13:29:11.416139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.021 13:29:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 411122 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 411122 00:04:49.021 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 411122 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 411122 ']' 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 411122 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411122 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411122' 00:04:49.282 killing process with pid 411122 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 411122 00:04:49.282 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 411122 00:04:49.543 00:04:49.543 real 0m1.545s 00:04:49.543 user 0m1.656s 00:04:49.543 sys 0m0.528s 00:04:49.543 13:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.543 13:29:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.543 ************************************ 00:04:49.543 END TEST default_locks_via_rpc 00:04:49.543 ************************************ 00:04:49.543 13:29:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:49.543 13:29:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.543 13:29:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.543 13:29:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.543 ************************************ 00:04:49.543 START TEST non_locking_app_on_locked_coremask 00:04:49.543 ************************************ 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=411479 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 411479 /var/tmp/spdk.sock 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 411479 ']' 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.543 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.544 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
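The default_locks_via_rpc run above toggles lock checking at runtime with the `framework_disable_cpumask_locks` / `framework_enable_cpumask_locks` RPCs. A hypothetical dry-run wrapper that only builds the rpc.py command line, so it can be exercised without a live SPDK target (paths and the helper name are assumptions):

```shell
# Dry-run builder for the two cpumask-lock RPCs traced above.
cpumask_locks_cmd() {
    local action=$1 sock=${2:-/var/tmp/spdk.sock}
    case $action in
        enable|disable)
            echo "scripts/rpc.py -s $sock framework_${action}_cpumask_locks" ;;
        *)
            echo "usage: cpumask_locks_cmd enable|disable [rpc-sock]" >&2
            return 1 ;;
    esac
}
```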
00:04:49.544 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.544 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.544 13:29:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.544 [2024-11-06 13:29:12.903740] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:49.544 [2024-11-06 13:29:12.903803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411479 ] 00:04:49.803 [2024-11-06 13:29:12.976636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.803 [2024-11-06 13:29:13.015513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.373 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.373 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.373 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=411811 00:04:50.373 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 411811 /var/tmp/spdk2.sock 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 411811 ']' 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.374 13:29:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.374 13:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:50.374 [2024-11-06 13:29:13.735113] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:50.374 [2024-11-06 13:29:13.735167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411811 ] 00:04:50.634 [2024-11-06 13:29:13.846984] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
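The dual-instance launch above pins both targets to core mask 0x1 but gives the second one its own RPC socket plus `--disable-cpumask-locks`, so it does not contend for the first instance's `/var/tmp/spdk_cpu_lock_*` files. A tiny builder for the second instance's arguments, as an illustration only (the helper name is ours):

```shell
# Argument set for the second spdk_tgt in the traced pattern: shared
# core mask, separate RPC socket, core lock checks disabled.
second_tgt_args() {
    local sock=$1
    echo "-m 0x1 --disable-cpumask-locks -r $sock"
}
```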
00:04:50.634 [2024-11-06 13:29:13.847013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.634 [2024-11-06 13:29:13.919449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.204 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.204 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:51.204 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 411479 00:04:51.204 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 411479 00:04:51.205 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.465 lslocks: write error 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 411479 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 411479 ']' 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 411479 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411479 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 411479' 00:04:51.465 killing process with pid 411479 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 411479 00:04:51.465 13:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 411479 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 411811 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 411811 ']' 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 411811 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 411811 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 411811' 00:04:52.036 killing process with pid 411811 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 411811 00:04:52.036 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 411811 00:04:52.297 00:04:52.297 real 0m2.648s 00:04:52.297 user 0m2.954s 00:04:52.297 sys 0m0.758s 00:04:52.297 13:29:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.297 13:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.297 ************************************ 00:04:52.297 END TEST non_locking_app_on_locked_coremask 00:04:52.297 ************************************ 00:04:52.297 13:29:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:52.297 13:29:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.297 13:29:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.297 13:29:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.297 ************************************ 00:04:52.297 START TEST locking_app_on_unlocked_coremask 00:04:52.297 ************************************ 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=412185 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 412185 /var/tmp/spdk.sock 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 412185 ']' 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.297 13:29:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.297 13:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.297 [2024-11-06 13:29:15.628313] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:52.297 [2024-11-06 13:29:15.628370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412185 ] 00:04:52.558 [2024-11-06 13:29:15.700328] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:52.558 [2024-11-06 13:29:15.700358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.558 [2024-11-06 13:29:15.738141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=412243 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 412243 /var/tmp/spdk2.sock 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 412243 ']' 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.129 13:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.129 [2024-11-06 13:29:16.454079] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:53.129 [2024-11-06 13:29:16.454123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412243 ] 00:04:53.390 [2024-11-06 13:29:16.555800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.390 [2024-11-06 13:29:16.631263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.962 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.962 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:53.962 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 412243 00:04:53.962 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412243 00:04:53.962 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.533 lslocks: write error 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 412185 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 412185 ']' 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 412185 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:54.533 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412185 00:04:54.794 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:54.794 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:54.794 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412185' 00:04:54.794 killing process with pid 412185 00:04:54.794 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 412185 00:04:54.794 13:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 412185 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 412243 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 412243 ']' 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 412243 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412243 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412243' 00:04:55.056 killing process with pid 412243 00:04:55.056 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 412243 00:04:55.056 13:29:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 412243 00:04:55.317 00:04:55.317 real 0m3.026s 00:04:55.317 user 0m3.363s 00:04:55.317 sys 0m0.864s 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.317 ************************************ 00:04:55.317 END TEST locking_app_on_unlocked_coremask 00:04:55.317 ************************************ 00:04:55.317 13:29:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:55.317 13:29:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.317 13:29:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.317 13:29:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.317 ************************************ 00:04:55.317 START TEST locking_app_on_locked_coremask 00:04:55.317 ************************************ 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=412896 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 412896 /var/tmp/spdk.sock 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 412896 ']' 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.317 13:29:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.317 13:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.578 [2024-11-06 13:29:18.727869] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:55.578 [2024-11-06 13:29:18.727920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412896 ] 00:04:55.578 [2024-11-06 13:29:18.798912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.578 [2024-11-06 13:29:18.836566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=412914 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 412914 /var/tmp/spdk2.sock 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:56.149 13:29:19 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 412914 /var/tmp/spdk2.sock 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 412914 /var/tmp/spdk2.sock 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 412914 ']' 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.149 13:29:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.410 [2024-11-06 13:29:19.563154] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:56.410 [2024-11-06 13:29:19.563209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412914 ] 00:04:56.410 [2024-11-06 13:29:19.676133] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 412896 has claimed it. 00:04:56.410 [2024-11-06 13:29:19.676175] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (412914) - No such process 00:04:56.980 ERROR: process (pid: 412914) is no longer running 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 412896 00:04:56.980 13:29:20 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 412896 00:04:56.980 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.550 lslocks: write error 00:04:57.550 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 412896 00:04:57.550 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 412896 ']' 00:04:57.550 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 412896 00:04:57.550 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:57.550 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412896 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412896' 00:04:57.551 killing process with pid 412896 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 412896 00:04:57.551 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 412896 00:04:57.811 00:04:57.811 real 0m2.258s 00:04:57.811 user 0m2.522s 00:04:57.811 sys 0m0.638s 00:04:57.811 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.811 13:29:20 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.811 ************************************ 00:04:57.811 END TEST locking_app_on_locked_coremask 00:04:57.811 ************************************ 00:04:57.811 13:29:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:57.811 13:29:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.811 13:29:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.811 13:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.811 ************************************ 00:04:57.811 START TEST locking_overlapped_coremask 00:04:57.811 ************************************ 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=413275 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 413275 /var/tmp/spdk.sock 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 413275 ']' 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.811 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.811 [2024-11-06 13:29:21.062819] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:57.811 [2024-11-06 13:29:21.062870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413275 ] 00:04:57.811 [2024-11-06 13:29:21.134828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.811 [2024-11-06 13:29:21.172114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.811 [2024-11-06 13:29:21.172226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.811 [2024-11-06 13:29:21.172229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=413507 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 413507 /var/tmp/spdk2.sock 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 413507 /var/tmp/spdk2.sock 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 413507 /var/tmp/spdk2.sock 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 413507 ']' 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.753 13:29:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.753 [2024-11-06 13:29:21.916628] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:04:58.753 [2024-11-06 13:29:21.916682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413507 ] 00:04:58.753 [2024-11-06 13:29:22.005144] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413275 has claimed it. 00:04:58.753 [2024-11-06 13:29:22.005179] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (413507) - No such process 00:04:59.324 ERROR: process (pid: 413507) is no longer running 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 413275 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 413275 ']' 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 413275 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 413275 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 413275' 00:04:59.324 killing process with pid 413275 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 413275 00:04:59.324 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 413275 00:04:59.584 00:04:59.584 real 0m1.797s 00:04:59.584 user 0m5.209s 00:04:59.584 sys 0m0.385s 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.584 ************************************ 
00:04:59.584 END TEST locking_overlapped_coremask 00:04:59.584 ************************************ 00:04:59.584 13:29:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:59.584 13:29:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.584 13:29:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.584 13:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.584 ************************************ 00:04:59.584 START TEST locking_overlapped_coremask_via_rpc 00:04:59.584 ************************************ 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=413649 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 413649 /var/tmp/spdk.sock 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 413649 ']' 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:59.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.584 13:29:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.584 [2024-11-06 13:29:22.935791] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:04:59.584 [2024-11-06 13:29:22.935838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413649 ] 00:04:59.844 [2024-11-06 13:29:23.006825] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:59.845 [2024-11-06 13:29:23.006855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.845 [2024-11-06 13:29:23.044442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.845 [2024-11-06 13:29:23.044553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.845 [2024-11-06 13:29:23.044556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=413974 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 413974 /var/tmp/spdk2.sock 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 413974 ']' 00:05:00.415 13:29:23 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.415 13:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.415 [2024-11-06 13:29:23.789653] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:05:00.415 [2024-11-06 13:29:23.789709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413974 ] 00:05:00.676 [2024-11-06 13:29:23.877067] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:00.676 [2024-11-06 13:29:23.877095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.676 [2024-11-06 13:29:23.940457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.676 [2024-11-06 13:29:23.943867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.676 [2024-11-06 13:29:23.943869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.249 13:29:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.249 [2024-11-06 13:29:24.600811] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 413649 has claimed it. 00:05:01.249 request: 00:05:01.249 { 00:05:01.249 "method": "framework_enable_cpumask_locks", 00:05:01.249 "req_id": 1 00:05:01.249 } 00:05:01.249 Got JSON-RPC error response 00:05:01.249 response: 00:05:01.249 { 00:05:01.249 "code": -32603, 00:05:01.249 "message": "Failed to claim CPU core: 2" 00:05:01.249 } 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 413649 /var/tmp/spdk.sock 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- 
# '[' -z 413649 ']' 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.249 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 413974 /var/tmp/spdk2.sock 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 413974 ']' 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.510 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.511 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:01.511 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.511 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.772 00:05:01.772 real 0m2.096s 00:05:01.772 user 0m0.851s 00:05:01.772 sys 0m0.165s 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.772 13:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.772 ************************************ 00:05:01.772 END TEST locking_overlapped_coremask_via_rpc 00:05:01.772 ************************************ 00:05:01.772 13:29:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.772 13:29:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413649 ]] 00:05:01.772 13:29:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 413649 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 413649 ']' 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 413649 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 413649 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 413649' 00:05:01.772 killing process with pid 413649 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 413649 00:05:01.772 13:29:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 413649 00:05:02.034 13:29:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 413974 ]] 00:05:02.034 13:29:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 413974 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 413974 ']' 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 413974 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 413974 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 413974' 00:05:02.034 
killing process with pid 413974 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 413974 00:05:02.034 13:29:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 413974 00:05:02.295 13:29:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.295 13:29:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:02.295 13:29:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 413649 ]] 00:05:02.295 13:29:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 413649 00:05:02.295 13:29:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 413649 ']' 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 413649 00:05:02.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (413649) - No such process 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 413649 is not found' 00:05:02.296 Process with pid 413649 is not found 00:05:02.296 13:29:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 413974 ]] 00:05:02.296 13:29:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 413974 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 413974 ']' 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 413974 00:05:02.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (413974) - No such process 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 413974 is not found' 00:05:02.296 Process with pid 413974 is not found 00:05:02.296 13:29:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.296 00:05:02.296 real 0m16.256s 00:05:02.296 user 0m28.546s 00:05:02.296 sys 0m4.829s 00:05:02.296 13:29:25 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.296 13:29:25 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.296 ************************************ 00:05:02.296 END TEST cpu_locks 00:05:02.296 ************************************ 00:05:02.296 00:05:02.296 real 0m41.207s 00:05:02.296 user 1m19.427s 00:05:02.296 sys 0m8.080s 00:05:02.296 13:29:25 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.296 13:29:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.296 ************************************ 00:05:02.296 END TEST event 00:05:02.296 ************************************ 00:05:02.296 13:29:25 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.296 13:29:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.296 13:29:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.296 13:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:02.296 ************************************ 00:05:02.296 START TEST thread 00:05:02.296 ************************************ 00:05:02.296 13:29:25 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.557 * Looking for test storage... 
00:05:02.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.557 13:29:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.557 13:29:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.557 13:29:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.557 13:29:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.557 13:29:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.557 13:29:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.557 13:29:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.557 13:29:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.557 13:29:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.557 13:29:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.557 13:29:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.557 13:29:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:02.557 13:29:25 thread -- scripts/common.sh@345 -- # : 1 00:05:02.557 13:29:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.557 13:29:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.557 13:29:25 thread -- scripts/common.sh@365 -- # decimal 1 00:05:02.557 13:29:25 thread -- scripts/common.sh@353 -- # local d=1 00:05:02.557 13:29:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.557 13:29:25 thread -- scripts/common.sh@355 -- # echo 1 00:05:02.557 13:29:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.557 13:29:25 thread -- scripts/common.sh@366 -- # decimal 2 00:05:02.557 13:29:25 thread -- scripts/common.sh@353 -- # local d=2 00:05:02.557 13:29:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.557 13:29:25 thread -- scripts/common.sh@355 -- # echo 2 00:05:02.557 13:29:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.557 13:29:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.557 13:29:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.557 13:29:25 thread -- scripts/common.sh@368 -- # return 0 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.557 --rc genhtml_branch_coverage=1 00:05:02.557 --rc genhtml_function_coverage=1 00:05:02.557 --rc genhtml_legend=1 00:05:02.557 --rc geninfo_all_blocks=1 00:05:02.557 --rc geninfo_unexecuted_blocks=1 00:05:02.557 00:05:02.557 ' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.557 --rc genhtml_branch_coverage=1 00:05:02.557 --rc genhtml_function_coverage=1 00:05:02.557 --rc genhtml_legend=1 00:05:02.557 --rc geninfo_all_blocks=1 00:05:02.557 --rc geninfo_unexecuted_blocks=1 00:05:02.557 00:05:02.557 ' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:02.557 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.557 --rc genhtml_branch_coverage=1 00:05:02.557 --rc genhtml_function_coverage=1 00:05:02.557 --rc genhtml_legend=1 00:05:02.557 --rc geninfo_all_blocks=1 00:05:02.557 --rc geninfo_unexecuted_blocks=1 00:05:02.557 00:05:02.557 ' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.557 --rc genhtml_branch_coverage=1 00:05:02.557 --rc genhtml_function_coverage=1 00:05:02.557 --rc genhtml_legend=1 00:05:02.557 --rc geninfo_all_blocks=1 00:05:02.557 --rc geninfo_unexecuted_blocks=1 00:05:02.557 00:05:02.557 ' 00:05:02.557 13:29:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.557 13:29:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.557 ************************************ 00:05:02.557 START TEST thread_poller_perf 00:05:02.557 ************************************ 00:05:02.558 13:29:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.558 [2024-11-06 13:29:25.917998] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:05:02.558 [2024-11-06 13:29:25.918123] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414431 ] 00:05:02.818 [2024-11-06 13:29:25.996679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.818 [2024-11-06 13:29:26.038197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.818 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:03.899 [2024-11-06T12:29:27.275Z] ====================================== 00:05:03.899 [2024-11-06T12:29:27.275Z] busy:2412543954 (cyc) 00:05:03.899 [2024-11-06T12:29:27.275Z] total_run_count: 287000 00:05:03.899 [2024-11-06T12:29:27.275Z] tsc_hz: 2400000000 (cyc) 00:05:03.899 [2024-11-06T12:29:27.275Z] ====================================== 00:05:03.899 [2024-11-06T12:29:27.275Z] poller_cost: 8406 (cyc), 3502 (nsec) 00:05:03.899 00:05:03.899 real 0m1.185s 00:05:03.899 user 0m1.112s 00:05:03.899 sys 0m0.069s 00:05:03.899 13:29:27 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.899 13:29:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.899 ************************************ 00:05:03.899 END TEST thread_poller_perf 00:05:03.899 ************************************ 00:05:03.899 13:29:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.899 13:29:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:03.899 13:29:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.899 13:29:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.899 ************************************ 00:05:03.899 START TEST thread_poller_perf 00:05:03.899 
************************************ 00:05:03.899 13:29:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.899 [2024-11-06 13:29:27.180281] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:05:03.899 [2024-11-06 13:29:27.180390] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414787 ] 00:05:03.899 [2024-11-06 13:29:27.255765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.194 [2024-11-06 13:29:27.294927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.194 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:05.248 [2024-11-06T12:29:28.624Z] ====================================== 00:05:05.248 [2024-11-06T12:29:28.624Z] busy:2401798152 (cyc) 00:05:05.248 [2024-11-06T12:29:28.624Z] total_run_count: 3807000 00:05:05.248 [2024-11-06T12:29:28.624Z] tsc_hz: 2400000000 (cyc) 00:05:05.248 [2024-11-06T12:29:28.624Z] ====================================== 00:05:05.248 [2024-11-06T12:29:28.624Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:05.248 00:05:05.248 real 0m1.169s 00:05:05.248 user 0m1.101s 00:05:05.248 sys 0m0.064s 00:05:05.248 13:29:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.248 13:29:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.248 ************************************ 00:05:05.248 END TEST thread_poller_perf 00:05:05.248 ************************************ 00:05:05.248 13:29:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:05.248 00:05:05.248 real 0m2.700s 00:05:05.248 user 0m2.374s 00:05:05.248 sys 0m0.337s 00:05:05.248 13:29:28 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.248 13:29:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.248 ************************************ 00:05:05.248 END TEST thread 00:05:05.248 ************************************ 00:05:05.248 13:29:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:05.248 13:29:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:05.248 13:29:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.248 13:29:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.248 13:29:28 -- common/autotest_common.sh@10 -- # set +x 00:05:05.248 ************************************ 00:05:05.248 START TEST app_cmdline 00:05:05.248 ************************************ 00:05:05.249 13:29:28 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:05.249 * Looking for test storage... 00:05:05.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:05.249 13:29:28 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.249 13:29:28 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.249 13:29:28 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.529 13:29:28 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:05.529 13:29:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.530 13:29:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.530 --rc genhtml_branch_coverage=1 
00:05:05.530 --rc genhtml_function_coverage=1 00:05:05.530 --rc genhtml_legend=1 00:05:05.530 --rc geninfo_all_blocks=1 00:05:05.530 --rc geninfo_unexecuted_blocks=1 00:05:05.530 00:05:05.530 ' 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.530 --rc genhtml_branch_coverage=1 00:05:05.530 --rc genhtml_function_coverage=1 00:05:05.530 --rc genhtml_legend=1 00:05:05.530 --rc geninfo_all_blocks=1 00:05:05.530 --rc geninfo_unexecuted_blocks=1 00:05:05.530 00:05:05.530 ' 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.530 --rc genhtml_branch_coverage=1 00:05:05.530 --rc genhtml_function_coverage=1 00:05:05.530 --rc genhtml_legend=1 00:05:05.530 --rc geninfo_all_blocks=1 00:05:05.530 --rc geninfo_unexecuted_blocks=1 00:05:05.530 00:05:05.530 ' 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.530 --rc genhtml_branch_coverage=1 00:05:05.530 --rc genhtml_function_coverage=1 00:05:05.530 --rc genhtml_legend=1 00:05:05.530 --rc geninfo_all_blocks=1 00:05:05.530 --rc geninfo_unexecuted_blocks=1 00:05:05.530 00:05:05.530 ' 00:05:05.530 13:29:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:05.530 13:29:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=415092 00:05:05.530 13:29:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 415092 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 415092 ']' 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@840 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.530 13:29:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.530 13:29:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:05.530 [2024-11-06 13:29:28.700671] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:05:05.530 [2024-11-06 13:29:28.700740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415092 ] 00:05:05.530 [2024-11-06 13:29:28.778144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.530 [2024-11-06 13:29:28.820708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.163 13:29:29 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.163 13:29:29 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:06.163 13:29:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:06.483 { 00:05:06.483 "version": "SPDK v25.01-pre git sha1 cfcfe6c3e", 00:05:06.483 "fields": { 00:05:06.483 "major": 25, 00:05:06.483 "minor": 1, 00:05:06.483 "patch": 0, 00:05:06.483 "suffix": "-pre", 00:05:06.483 "commit": "cfcfe6c3e" 00:05:06.483 } 00:05:06.483 } 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:06.483 13:29:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:06.483 13:29:29 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.745 request: 00:05:06.745 { 00:05:06.745 "method": "env_dpdk_get_mem_stats", 00:05:06.745 "req_id": 1 00:05:06.745 } 00:05:06.745 Got JSON-RPC error response 00:05:06.745 response: 00:05:06.745 { 00:05:06.745 "code": -32601, 00:05:06.745 "message": "Method not found" 00:05:06.745 } 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.745 13:29:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 415092 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 415092 ']' 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 415092 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 415092 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 415092' 00:05:06.745 killing process with pid 415092 00:05:06.745 13:29:29 
app_cmdline -- common/autotest_common.sh@971 -- # kill 415092 00:05:06.745 13:29:29 app_cmdline -- common/autotest_common.sh@976 -- # wait 415092 00:05:07.005 00:05:07.006 real 0m1.735s 00:05:07.006 user 0m2.086s 00:05:07.006 sys 0m0.447s 00:05:07.006 13:29:30 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.006 13:29:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.006 ************************************ 00:05:07.006 END TEST app_cmdline 00:05:07.006 ************************************ 00:05:07.006 13:29:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:07.006 13:29:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.006 13:29:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.006 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:05:07.006 ************************************ 00:05:07.006 START TEST version 00:05:07.006 ************************************ 00:05:07.006 13:29:30 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:07.006 * Looking for test storage... 
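The app_cmdline trace above exercises the suite's expect-failure helper: `NOT rpc.py env_dpdk_get_mem_stats` is supposed to fail (the method is not registered, hence the `-32601 "Method not found"` JSON-RPC response), and the helper records the non-zero exit status in `es` before inverting it. A minimal sketch of that idiom follows; the `NOT` name and `es` handling mirror the trace, but this is a reconstruction for illustration, not the suite's actual `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the expect-failure idiom seen in the trace:
# run the wrapped command, capture its exit status, and succeed only if it failed.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert the result: a non-zero status from the command means NOT succeeds.
    (( es != 0 ))
}

# A command that fails satisfies NOT; a command that succeeds does not.
NOT false && echo "NOT false -> ok"
NOT true || echo "NOT true -> correctly rejected"
```

The real helper also distinguishes exit codes above 128 (signal deaths), as the `(( es > 128 ))` check in the trace suggests; this sketch skips that refinement.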
00:05:07.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:07.006 13:29:30 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.006 13:29:30 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.006 13:29:30 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.267 13:29:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.267 13:29:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.267 13:29:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.267 13:29:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.267 13:29:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.267 13:29:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.267 13:29:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.267 13:29:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.267 13:29:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.267 13:29:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.267 13:29:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.267 13:29:30 version -- scripts/common.sh@344 -- # case "$op" in 00:05:07.267 13:29:30 version -- scripts/common.sh@345 -- # : 1 00:05:07.267 13:29:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.267 13:29:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.267 13:29:30 version -- scripts/common.sh@365 -- # decimal 1 00:05:07.267 13:29:30 version -- scripts/common.sh@353 -- # local d=1 00:05:07.267 13:29:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.267 13:29:30 version -- scripts/common.sh@355 -- # echo 1 00:05:07.267 13:29:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.267 13:29:30 version -- scripts/common.sh@366 -- # decimal 2 00:05:07.267 13:29:30 version -- scripts/common.sh@353 -- # local d=2 00:05:07.267 13:29:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.267 13:29:30 version -- scripts/common.sh@355 -- # echo 2 00:05:07.267 13:29:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.267 13:29:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.267 13:29:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.267 13:29:30 version -- scripts/common.sh@368 -- # return 0 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.267 --rc genhtml_branch_coverage=1 00:05:07.267 --rc genhtml_function_coverage=1 00:05:07.267 --rc genhtml_legend=1 00:05:07.267 --rc geninfo_all_blocks=1 00:05:07.267 --rc geninfo_unexecuted_blocks=1 00:05:07.267 00:05:07.267 ' 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.267 --rc genhtml_branch_coverage=1 00:05:07.267 --rc genhtml_function_coverage=1 00:05:07.267 --rc genhtml_legend=1 00:05:07.267 --rc geninfo_all_blocks=1 00:05:07.267 --rc geninfo_unexecuted_blocks=1 00:05:07.267 00:05:07.267 ' 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.267 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.267 --rc genhtml_branch_coverage=1 00:05:07.267 --rc genhtml_function_coverage=1 00:05:07.267 --rc genhtml_legend=1 00:05:07.267 --rc geninfo_all_blocks=1 00:05:07.267 --rc geninfo_unexecuted_blocks=1 00:05:07.267 00:05:07.267 ' 00:05:07.267 13:29:30 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.267 --rc genhtml_branch_coverage=1 00:05:07.267 --rc genhtml_function_coverage=1 00:05:07.267 --rc genhtml_legend=1 00:05:07.267 --rc geninfo_all_blocks=1 00:05:07.267 --rc geninfo_unexecuted_blocks=1 00:05:07.267 00:05:07.267 ' 00:05:07.267 13:29:30 version -- app/version.sh@17 -- # get_header_version major 00:05:07.267 13:29:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # cut -f2 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.267 13:29:30 version -- app/version.sh@17 -- # major=25 00:05:07.267 13:29:30 version -- app/version.sh@18 -- # get_header_version minor 00:05:07.267 13:29:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # cut -f2 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.267 13:29:30 version -- app/version.sh@18 -- # minor=1 00:05:07.267 13:29:30 version -- app/version.sh@19 -- # get_header_version patch 00:05:07.267 13:29:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # cut -f2 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.267 
13:29:30 version -- app/version.sh@19 -- # patch=0 00:05:07.267 13:29:30 version -- app/version.sh@20 -- # get_header_version suffix 00:05:07.267 13:29:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.267 13:29:30 version -- app/version.sh@14 -- # cut -f2 00:05:07.268 13:29:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.268 13:29:30 version -- app/version.sh@20 -- # suffix=-pre 00:05:07.268 13:29:30 version -- app/version.sh@22 -- # version=25.1 00:05:07.268 13:29:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:07.268 13:29:30 version -- app/version.sh@28 -- # version=25.1rc0 00:05:07.268 13:29:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:07.268 13:29:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:07.268 13:29:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:07.268 13:29:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:07.268 00:05:07.268 real 0m0.285s 00:05:07.268 user 0m0.166s 00:05:07.268 sys 0m0.164s 00:05:07.268 13:29:30 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.268 13:29:30 version -- common/autotest_common.sh@10 -- # set +x 00:05:07.268 ************************************ 00:05:07.268 END TEST version 00:05:07.268 ************************************ 00:05:07.268 13:29:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:07.268 13:29:30 -- spdk/autotest.sh@194 -- # uname -s 00:05:07.268 13:29:30 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:07.268 13:29:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.268 13:29:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.268 13:29:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:07.268 13:29:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.268 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:05:07.268 13:29:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:07.268 13:29:30 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:07.268 13:29:30 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.268 13:29:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:07.268 13:29:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.268 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:05:07.528 ************************************ 00:05:07.528 START TEST nvmf_tcp 00:05:07.528 ************************************ 00:05:07.528 13:29:30 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.528 * Looking for test storage... 
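The version test that just completed derives `major=25`, `minor=1`, and `suffix=-pre` by grepping the `SPDK_VERSION_*` defines out of `include/spdk/version.h` with `grep -E`, taking field 2 with `cut`, and stripping quotes with `tr -d '"'`. The same pipeline can be sketched self-contained against a throwaway header; the define values below are written locally for the demo rather than read from the real tree:

```shell
#!/usr/bin/env bash
# Build a throwaway version.h shaped like the defines the test greps for
# (tab-separated, as the trace's bare `cut -f2` implies).
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n'      > "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n'      >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n'      >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

get_header_version() {
    # Same shape as the trace: match the define, take field 2, strip quotes.
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version major)
minor=$(get_header_version minor)
suffix=$(get_header_version suffix)
echo "${major}.${minor}${suffix}"   # 25.1-pre
rm -f "$hdr"
```

This matches how the trace assembles `version=25.1` and then, because `patch == 0`, appends the release-candidate form `25.1rc0` for comparison with `spdk.__version__`.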
00:05:07.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:07.528 13:29:30 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.528 13:29:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.528 13:29:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.528 13:29:30 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.528 13:29:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.529 13:29:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.529 --rc genhtml_branch_coverage=1 00:05:07.529 --rc genhtml_function_coverage=1 00:05:07.529 --rc genhtml_legend=1 00:05:07.529 --rc geninfo_all_blocks=1 00:05:07.529 --rc geninfo_unexecuted_blocks=1 00:05:07.529 00:05:07.529 ' 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.529 --rc genhtml_branch_coverage=1 00:05:07.529 --rc genhtml_function_coverage=1 00:05:07.529 --rc genhtml_legend=1 00:05:07.529 --rc geninfo_all_blocks=1 00:05:07.529 --rc geninfo_unexecuted_blocks=1 00:05:07.529 00:05:07.529 ' 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.529 --rc genhtml_branch_coverage=1 00:05:07.529 --rc genhtml_function_coverage=1 00:05:07.529 --rc genhtml_legend=1 00:05:07.529 --rc geninfo_all_blocks=1 00:05:07.529 --rc geninfo_unexecuted_blocks=1 00:05:07.529 00:05:07.529 ' 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.529 --rc genhtml_branch_coverage=1 00:05:07.529 --rc genhtml_function_coverage=1 00:05:07.529 --rc genhtml_legend=1 00:05:07.529 --rc geninfo_all_blocks=1 00:05:07.529 --rc geninfo_unexecuted_blocks=1 00:05:07.529 00:05:07.529 ' 00:05:07.529 13:29:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:07.529 13:29:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:07.529 13:29:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.529 13:29:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.790 ************************************ 00:05:07.790 START TEST nvmf_target_core 00:05:07.790 ************************************ 00:05:07.790 13:29:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:07.790 * Looking for test storage... 
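Each TEST section above re-runs the same lcov gate: `lt 1.15 2` calls `cmp_versions`, which splits both version strings into arrays on `.`, walks the fields numerically, and here concludes 1.15 < 2, so the plain-lcov `--rc` options get exported. A rough field-wise reimplementation of that comparison is below; it is simplified to purely numeric dotted versions, whereas the real `scripts/common.sh` also splits on `-` and `:`:

```shell
#!/usr/bin/env bash
# Simplified dotted-version less-than in the spirit of cmp_versions/lt:
# split on '.', treat missing fields as 0, compare numerically left to right.
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # all fields equal -> not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

The `10#` base prefix keeps fields like `08` from being parsed as invalid octal, a detail any field-wise bash comparator needs.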
00:05:07.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.790 --rc genhtml_branch_coverage=1 00:05:07.790 --rc genhtml_function_coverage=1 00:05:07.790 --rc genhtml_legend=1 00:05:07.790 --rc geninfo_all_blocks=1 00:05:07.790 --rc geninfo_unexecuted_blocks=1 00:05:07.790 00:05:07.790 ' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.790 --rc genhtml_branch_coverage=1 
00:05:07.790 --rc genhtml_function_coverage=1 00:05:07.790 --rc genhtml_legend=1 00:05:07.790 --rc geninfo_all_blocks=1 00:05:07.790 --rc geninfo_unexecuted_blocks=1 00:05:07.790 00:05:07.790 ' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.790 --rc genhtml_branch_coverage=1 00:05:07.790 --rc genhtml_function_coverage=1 00:05:07.790 --rc genhtml_legend=1 00:05:07.790 --rc geninfo_all_blocks=1 00:05:07.790 --rc geninfo_unexecuted_blocks=1 00:05:07.790 00:05:07.790 ' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.790 --rc genhtml_branch_coverage=1 00:05:07.790 --rc genhtml_function_coverage=1 00:05:07.790 --rc genhtml_legend=1 00:05:07.790 --rc geninfo_all_blocks=1 00:05:07.790 --rc geninfo_unexecuted_blocks=1 00:05:07.790 00:05:07.790 ' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
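The `common.sh: line 33: [: : integer expression expected` message recorded above is `test` complaining that `'[' '' -eq 1 ']'` compares an empty string numerically. The script tolerates it (the comparison just evaluates false and the `+=` branch is skipped), but the noise is avoidable by defaulting the variable before the numeric test. A small illustration, with an invented variable name standing in for whatever is empty at `common.sh:33`:

```shell
#!/usr/bin/env bash
maybe_empty=""   # stands in for the unset option that triggers the log message

# The failing shape: test(1) requires an integer on both sides of -eq,
# so this prints an "integer expression expected" error (suppressed here).
[ "$maybe_empty" -eq 1 ] 2>/dev/null && echo "never reached"

# Defaulting with ${var:-0} keeps the numeric comparison well-formed.
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

With the default in place the test exits 0 or 1 cleanly instead of status 2 with a diagnostic, which keeps traces like this one free of spurious error lines.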
00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:07.790 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:07.791 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.791 13:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:08.052 ************************************ 00:05:08.052 START TEST nvmf_abort 00:05:08.052 ************************************ 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:08.052 * Looking for test storage... 
00:05:08.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.052 
13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.052 --rc genhtml_branch_coverage=1 00:05:08.052 --rc genhtml_function_coverage=1 00:05:08.052 --rc genhtml_legend=1 00:05:08.052 --rc geninfo_all_blocks=1 00:05:08.052 --rc 
geninfo_unexecuted_blocks=1 00:05:08.052 00:05:08.052 ' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.052 --rc genhtml_branch_coverage=1 00:05:08.052 --rc genhtml_function_coverage=1 00:05:08.052 --rc genhtml_legend=1 00:05:08.052 --rc geninfo_all_blocks=1 00:05:08.052 --rc geninfo_unexecuted_blocks=1 00:05:08.052 00:05:08.052 ' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.052 --rc genhtml_branch_coverage=1 00:05:08.052 --rc genhtml_function_coverage=1 00:05:08.052 --rc genhtml_legend=1 00:05:08.052 --rc geninfo_all_blocks=1 00:05:08.052 --rc geninfo_unexecuted_blocks=1 00:05:08.052 00:05:08.052 ' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.052 --rc genhtml_branch_coverage=1 00:05:08.052 --rc genhtml_function_coverage=1 00:05:08.052 --rc genhtml_legend=1 00:05:08.052 --rc geninfo_all_blocks=1 00:05:08.052 --rc geninfo_unexecuted_blocks=1 00:05:08.052 00:05:08.052 ' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.052 13:29:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.052 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:08.053 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:08.313 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:08.313 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:08.313 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:08.313 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.451 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.452 13:29:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:16.452 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:16.452 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.452 13:29:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:16.452 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:05:16.452 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.452 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:16.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:05:16.453 00:05:16.453 --- 10.0.0.2 ping statistics --- 00:05:16.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.453 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:05:16.453 00:05:16.453 --- 10.0.0.1 ping statistics --- 00:05:16.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.453 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=419412 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 419412 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 419412 ']' 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.453 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 [2024-11-06 13:29:38.837816] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:05:16.453 [2024-11-06 13:29:38.837882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.453 [2024-11-06 13:29:38.955291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.453 [2024-11-06 13:29:39.010918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.453 [2024-11-06 13:29:39.010967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.453 [2024-11-06 13:29:39.010975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.453 [2024-11-06 13:29:39.010983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.453 [2024-11-06 13:29:39.010989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:16.453 [2024-11-06 13:29:39.012767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.453 [2024-11-06 13:29:39.016778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.453 [2024-11-06 13:29:39.016805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 [2024-11-06 13:29:39.702361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 Malloc0 00:05:16.453 13:29:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 Delay0 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.453 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.453 [2024-11-06 13:29:39.759782] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.454 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.714 [2024-11-06 13:29:39.928887] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:18.627 Initializing NVMe Controllers 00:05:18.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:18.627 controller IO queue size 128 less than required 00:05:18.627 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:18.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:18.627 Initialization complete. Launching workers. 
00:05:18.627 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28915 00:05:18.627 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28976, failed to submit 62 00:05:18.627 success 28919, unsuccessful 57, failed 0 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:18.627 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:18.627 rmmod nvme_tcp 00:05:18.627 rmmod nvme_fabrics 00:05:18.888 rmmod nvme_keyring 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:18.888 13:29:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 419412 ']' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 419412 ']' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 419412' 00:05:18.888 killing process with pid 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 419412 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.888 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.437 00:05:21.437 real 0m13.116s 00:05:21.437 user 0m13.620s 00:05:21.437 sys 0m6.409s 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.437 ************************************ 00:05:21.437 END TEST nvmf_abort 00:05:21.437 ************************************ 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:21.437 ************************************ 00:05:21.437 START TEST nvmf_ns_hotplug_stress 00:05:21.437 ************************************ 00:05:21.437 13:29:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.437 * Looking for test storage... 00:05:21.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.437 
13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.437 13:29:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.437 --rc genhtml_branch_coverage=1 00:05:21.437 --rc genhtml_function_coverage=1 00:05:21.437 --rc genhtml_legend=1 00:05:21.437 --rc geninfo_all_blocks=1 00:05:21.437 --rc geninfo_unexecuted_blocks=1 00:05:21.437 00:05:21.437 ' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.437 --rc genhtml_branch_coverage=1 00:05:21.437 --rc genhtml_function_coverage=1 00:05:21.437 --rc genhtml_legend=1 00:05:21.437 --rc geninfo_all_blocks=1 00:05:21.437 --rc geninfo_unexecuted_blocks=1 00:05:21.437 00:05:21.437 ' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.437 --rc genhtml_branch_coverage=1 00:05:21.437 --rc genhtml_function_coverage=1 00:05:21.437 --rc genhtml_legend=1 00:05:21.437 --rc geninfo_all_blocks=1 00:05:21.437 --rc geninfo_unexecuted_blocks=1 00:05:21.437 00:05:21.437 ' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.437 --rc genhtml_branch_coverage=1 00:05:21.437 --rc genhtml_function_coverage=1 00:05:21.437 --rc genhtml_legend=1 00:05:21.437 --rc geninfo_all_blocks=1 00:05:21.437 --rc geninfo_unexecuted_blocks=1 00:05:21.437 
00:05:21.437 ' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.437 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.438 13:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:29.577 13:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:29.577 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:29.577 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:29.577 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:29.578 13:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:29.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.578 13:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:29.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:29.578 13:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:29.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:29.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:05:29.578 00:05:29.578 --- 10.0.0.2 ping statistics --- 00:05:29.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.578 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:29.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:29.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:05:29.578 00:05:29.578 --- 10.0.0.1 ping statistics --- 00:05:29.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.578 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=424419 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 424419 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 424419 ']' 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.578 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.578 [2024-11-06 13:29:51.995309] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:05:29.578 [2024-11-06 13:29:51.995368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:29.578 [2024-11-06 13:29:52.094261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.578 [2024-11-06 13:29:52.145704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:29.578 [2024-11-06 13:29:52.145776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:29.578 [2024-11-06 13:29:52.145785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.578 [2024-11-06 13:29:52.145792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.579 [2024-11-06 13:29:52.145798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:29.579 [2024-11-06 13:29:52.147588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.579 [2024-11-06 13:29:52.147788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.579 [2024-11-06 13:29:52.147848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:29.579 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:29.839 [2024-11-06 13:29:52.991494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.839 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:29.839 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:30.099 [2024-11-06 13:29:53.361089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:30.100 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:30.360 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:30.621 Malloc0 00:05:30.621 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:30.621 Delay0 00:05:30.621 13:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.881 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:31.142 NULL1 00:05:31.142 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:31.142 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=425023 00:05:31.142 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:31.142 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:31.142 13:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.526 Read completed with error (sct=0, sc=11) 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 13:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.526 13:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:32.526 13:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:32.788 true 00:05:32.788 13:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:32.788 13:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.731 13:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.731 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:33.731 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:33.991 true 00:05:33.991 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:33.991 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.251 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.630 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:34.630 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:34.630 true 00:05:34.630 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:34.630 13:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.575 13:29:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.836 13:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:35.836 13:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:36.097 true 00:05:36.097 13:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:36.097 13:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.038 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.038 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:37.038 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:37.299 true 00:05:37.299 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:37.299 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.299 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.560 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:37.561 13:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:37.821 true 00:05:37.821 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:37.821 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.821 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.082 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:38.082 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:38.343 true 00:05:38.343 13:30:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:38.343 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.604 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.604 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:38.604 13:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:38.864 true 00:05:38.864 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:38.864 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.125 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.125 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:39.125 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:39.386 true 00:05:39.386 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:39.386 13:30:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.647 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.647 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:39.647 13:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:39.907 true 00:05:39.907 13:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:39.907 13:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 13:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.292 13:30:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:41.292 13:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:41.553 true 00:05:41.553 13:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:41.553 13:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.495 13:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.495 13:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:42.495 13:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:42.756 true 00:05:42.756 13:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:42.756 13:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.756 13:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.017 13:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:43.017 13:30:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:43.278 true 00:05:43.278 13:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:43.278 13:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 13:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.660 13:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:44.660 13:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:44.660 true 00:05:44.660 13:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:44.660 13:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.601 13:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.862 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:45.862 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:45.862 true 00:05:45.862 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:45.862 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.122 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.384 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:46.384 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:46.384 true 00:05:46.644 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:46.644 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.644 13:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.904 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:46.904 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:47.165 true 00:05:47.165 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:47.165 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.165 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.427 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:47.427 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:47.687 true 00:05:47.687 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:47.687 13:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.687 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.687 13:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.948 13:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:47.948 13:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:48.208 true 00:05:48.208 13:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:48.208 13:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.150 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.150 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:49.150 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:49.410 true 00:05:49.410 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:49.410 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.410 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.670 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:49.670 13:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:49.930 true 00:05:49.930 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:49.930 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.189 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.190 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:50.190 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:50.449 true 00:05:50.449 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:50.449 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.708 13:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.708 13:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:50.708 13:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:50.968 true 00:05:50.968 13:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:50.968 13:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.349 13:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.349 13:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:52.349 13:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:52.349 true 00:05:52.609 13:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:52.609 13:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.438 13:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.438 13:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:53.438 13:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:53.698 true 00:05:53.698 13:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:53.698 13:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.957 13:30:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.957 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:53.958 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:54.217 true 00:05:54.217 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:54.217 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.477 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.477 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:54.477 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:54.737 true 00:05:54.737 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:54.737 13:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.997 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.997 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:54.997 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:55.257 true 00:05:55.257 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:55.257 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.517 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.779 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:55.779 13:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:55.779 true 00:05:55.779 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:55.779 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.039 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.300 
13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:56.300 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:56.300 true 00:05:56.300 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:56.300 13:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 13:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.680 13:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:57.680 13:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:57.940 true 00:05:57.940 13:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:57.940 13:30:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.879 13:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.879 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:58.879 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:59.140 true 00:05:59.140 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:59.140 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.400 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.400 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:59.400 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:59.660 true 00:05:59.660 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:05:59.660 13:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 13:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.041 13:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:01.041 13:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:01.041 true 00:06:01.301 13:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023 00:06:01.301 13:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.871 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.132 Initializing NVMe Controllers 00:06:02.132 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:02.132 Controller IO queue size 128, less than required.
00:06:02.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:02.132 Controller IO queue size 128, less than required.
00:06:02.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:02.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:02.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:02.132 Initialization complete. Launching workers.
00:06:02.132 ========================================================
00:06:02.132                                                                           Latency(us)
00:06:02.132 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:02.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2149.70       1.05   34340.71    2264.14 1024813.35
00:06:02.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16515.80       8.06    7750.10    1440.08  402030.22
00:06:02.132 ========================================================
00:06:02.132 Total                                                                    :   18665.49       9.11   10812.53    1440.08 1024813.35
00:06:02.132
00:06:02.132 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:06:02.132 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:06:02.392 true
00:06:02.392 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 425023
00:06:02.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (425023) - No such process
00:06:02.392 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 425023
00:06:02.392 13:30:25
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.652 13:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:02.913 null0 00:06:02.913 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.913 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.913 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:02.913 null1 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:03.174 null2 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.174 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:03.436 null3 00:06:03.436 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.436 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.436 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:03.696 null4 00:06:03.696 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.696 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.696 13:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:03.696 null5 00:06:03.696 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.696 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.696 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 
00:06:03.957 null6 00:06:03.957 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.957 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.957 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:04.219 null7 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 432179 432180 432182 432184 432186 432188 432190 432192 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.219 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.480 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.480 13:30:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.481 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.481 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.481 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.481 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.743 13:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.743 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.743 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.743 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.743 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.743 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.005 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.006 13:30:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.006 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.267 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.528 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.789 13:30:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.789 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 
13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.051 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.313 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.574 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.835 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:06:06.835 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.835 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.835 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.836 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.836 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.836 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:07.096 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.096 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.096 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.097 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.358 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.619 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.878 13:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.878 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.879 13:30:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.879 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
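The xtrace above records the add/remove loop of `target/ns_hotplug_stress.sh` (lines @16–@18): a bounded counter loop that repeatedly calls `rpc.py nvmf_subsystem_add_ns -n <nsid> <nqn> <bdev>` and `rpc.py nvmf_subsystem_remove_ns <nqn> <nsid>` against `nqn.2016-06.io.spdk:cnode1`, using null bdevs `null0`..`null7` for NSIDs 1..8. A minimal standalone sketch of that pattern follows; the `rpc` echo stub and the `stress_loop` helper are hypothetical stand-ins for `spdk/scripts/rpc.py` (no SPDK target is needed), and the random NSID selection is an assumption reconstructed from the interleaved ordering in the log, not the script's exact logic.

```shell
# Hypothetical stand-in for spdk/scripts/rpc.py: just echo the RPC it would send.
rpc() { echo "rpc $*"; }

stress_loop() {
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while [ "$i" -lt 10 ]; do
    # Pick an NSID in 1..8, mirroring the log; fall back to $i if RANDOM is unset.
    nsid=$(( ${RANDOM:-$i} % 8 + 1 ))
    # NSID n is backed by null bdev "null$((n-1))", as seen in the xtrace.
    rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
    rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    i=$((i + 1))
  done
}

stress_loop
```

Each of the 10 iterations emits one add and one remove RPC, which is why the log alternates `nvmf_subsystem_add_ns` and `nvmf_subsystem_remove_ns` entries between the `(( ++i ))` / `(( i < 10 ))` counter lines.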
target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:08.139 rmmod nvme_tcp 00:06:08.139 rmmod nvme_fabrics 00:06:08.139 rmmod nvme_keyring 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 424419 ']' 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 424419 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 424419 ']' 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 424419 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.139 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 424419 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 424419' 00:06:08.399 killing process with pid 424419 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 424419 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 424419 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.399 13:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.942 00:06:10.942 real 0m49.375s 00:06:10.942 user 3m14.825s 00:06:10.942 sys 0m15.387s 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 ************************************ 00:06:10.942 END TEST nvmf_ns_hotplug_stress 00:06:10.942 ************************************ 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.942 ************************************ 00:06:10.942 START TEST nvmf_delete_subsystem 00:06:10.942 ************************************ 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.942 * Looking for test storage... 
00:06:10.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.942 13:30:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:10.942 13:30:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:10.942 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.943 --rc genhtml_branch_coverage=1 00:06:10.943 --rc genhtml_function_coverage=1 00:06:10.943 --rc genhtml_legend=1 00:06:10.943 --rc geninfo_all_blocks=1 00:06:10.943 --rc geninfo_unexecuted_blocks=1 00:06:10.943 00:06:10.943 ' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.943 --rc genhtml_branch_coverage=1 00:06:10.943 --rc genhtml_function_coverage=1 00:06:10.943 --rc genhtml_legend=1 00:06:10.943 --rc geninfo_all_blocks=1 00:06:10.943 --rc geninfo_unexecuted_blocks=1 00:06:10.943 00:06:10.943 ' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.943 --rc genhtml_branch_coverage=1 00:06:10.943 --rc genhtml_function_coverage=1 00:06:10.943 --rc genhtml_legend=1 00:06:10.943 --rc geninfo_all_blocks=1 00:06:10.943 --rc geninfo_unexecuted_blocks=1 00:06:10.943 00:06:10.943 ' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.943 --rc genhtml_branch_coverage=1 00:06:10.943 --rc genhtml_function_coverage=1 00:06:10.943 --rc genhtml_legend=1 00:06:10.943 --rc geninfo_all_blocks=1 00:06:10.943 --rc geninfo_unexecuted_blocks=1 00:06:10.943 00:06:10.943 ' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.943 13:30:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.943 13:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.086 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.087 13:30:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:19.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:19.087 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:19.087 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:06:19.087 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:19.087 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:19.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:19.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:06:19.087 00:06:19.087 --- 10.0.0.2 ping statistics --- 00:06:19.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.088 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:06:19.088 00:06:19.088 --- 10.0.0.1 ping statistics --- 00:06:19.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.088 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:19.088 13:30:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=437368 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 437368 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 437368 ']' 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.088 13:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 [2024-11-06 13:30:41.556468] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:06:19.088 [2024-11-06 13:30:41.556531] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.088 [2024-11-06 13:30:41.640440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.088 [2024-11-06 13:30:41.681513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.088 [2024-11-06 13:30:41.681552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.088 [2024-11-06 13:30:41.681560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.088 [2024-11-06 13:30:41.681567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.088 [2024-11-06 13:30:41.681573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:19.088 [2024-11-06 13:30:41.682974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.088 [2024-11-06 13:30:41.683067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 [2024-11-06 13:30:42.396919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 [2024-11-06 13:30:42.413083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 NULL1 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 Delay0 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=437716 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:19.088 13:30:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:19.349 [2024-11-06 13:30:42.507868] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:21.262 13:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:21.262 13:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.262 13:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Read completed with error 
(sct=0, sc=8) 00:06:21.523 starting I/O failed: -6 00:06:21.523 Read completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.523 Write completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 starting I/O failed: -6 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 starting I/O failed: -6 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 starting I/O failed: -6 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 starting I/O failed: -6 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 [2024-11-06 13:30:44.674502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f680 is same with the state(6) to be set 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Read completed with error (sct=0, sc=8) 00:06:21.524 Write completed with error (sct=0, sc=8) 00:06:21.524 Write completed 
with error (sct=0, sc=8)
00:06:21.524 Read completed with error (sct=0, sc=8)
00:06:21.524 Write completed with error (sct=0, sc=8)
00:06:21.524 [... repeated Read/Write "completed with error (sct=0, sc=8)" records trimmed ...]
00:06:21.524 starting I/O failed: -6
00:06:21.524 [... further Read/Write completion errors interleaved with "starting I/O failed: -6" trimmed ...]
00:06:21.524 [2024-11-06 13:30:44.677339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8904000c40 is same with the state(6) to be set
00:06:22.466 [2024-11-06 13:30:45.647254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a09a0 is same with the state(6) to be set
00:06:22.466 [2024-11-06 13:30:45.678724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f860 is same with the state(6) to be set
00:06:22.466 [2024-11-06 13:30:45.678816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f4a0 is same with the state(6) to be set
00:06:22.466 [2024-11-06 13:30:45.679568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f890400d020 is same with the state(6) to be set
00:06:22.466 Read completed with error (sct=0, sc=8)
00:06:22.466 Read completed with error (sct=0, sc=8)
00:06:22.467 Read completed with error (sct=0, sc=8)
00:06:22.467 Write completed with error (sct=0, sc=8)
00:06:22.467 [2024-11-06 13:30:45.680006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f890400d7c0 is same with the state(6) to be set
00:06:22.467 Initializing NVMe Controllers
00:06:22.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:22.467 Controller IO queue size 128, less than required.
00:06:22.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:22.467 Initialization complete. Launching workers.
00:06:22.467 ========================================================
00:06:22.467 Latency(us)
00:06:22.467 Device Information : IOPS MiB/s Average min max
00:06:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.67 0.08 905024.33 229.97 1008647.45
00:06:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.72 0.08 928194.14 290.17 1010951.42
00:06:22.467 ========================================================
00:06:22.467 Total : 321.38 0.16 916250.57 229.97 1010951.42
00:06:22.467
00:06:22.467 [2024-11-06 13:30:45.680519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a09a0 (9): Bad file descriptor
00:06:22.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:22.467 13:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:22.467 13:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@34 -- # delay=0 00:06:22.467 13:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 437716 00:06:22.467 13:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 437716 00:06:23.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (437716) - No such process 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 437716 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 437716 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 437716 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 [2024-11-06 13:30:46.210620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.037 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.038 13:30:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=438399 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:23.038 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.038 [2024-11-06 13:30:46.290031] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
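The trace that follows (delete_subsystem.sh lines 57-60) polls the backgrounded spdk_nvme_perf process with `kill -0` every 0.5 s until the subsystem deletion makes it exit. A minimal standalone sketch of that polling pattern, with an illustrative stand-in process and retry budget rather than the harness's own:

```shell
#!/usr/bin/env bash
# Sketch of the delete_subsystem.sh wait loop: poll a PID with `kill -0`
# (signal 0 = existence check only) until the process is gone or a retry
# budget is exhausted.
wait_for_exit() {
  local pid=$1 budget=${2:-20} delay=0
  while kill -0 "$pid" 2>/dev/null; do
    (( delay++ > budget )) && return 1   # gave up: process still alive
    sleep 0.5
  done
  return 0                               # process has exited
}

sleep 1 &                  # stand-in for the background spdk_nvme_perf run
wait_for_exit $! 20 && echo "process exited"   # prints "process exited"
```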
00:06:23.608 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.608 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:23.608 13:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.869 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.869 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:23.869 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.441 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.441 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:24.441 13:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.010 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.010 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:25.010 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.579 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.579 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399 00:06:25.579 13:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.147 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.147 13:30:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399
00:06:26.147 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.407 Initializing NVMe Controllers
00:06:26.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:26.407 Controller IO queue size 128, less than required.
00:06:26.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:26.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:26.407 Initialization complete. Launching workers.
00:06:26.407 ========================================================
00:06:26.407 Latency(us)
00:06:26.407 Device Information : IOPS MiB/s Average min max
00:06:26.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002479.95 1000163.69 1042700.15
00:06:26.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002910.93 1000242.58 1009419.67
00:06:26.407 ========================================================
00:06:26.407 Total : 256.00 0.12 1002695.44 1000163.69 1042700.15
00:06:26.407
00:06:26.407 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.407 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 438399
00:06:26.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (438399) - No such process
00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 438399
00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap -
SIGINT SIGTERM EXIT 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.408 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.408 rmmod nvme_tcp 00:06:26.668 rmmod nvme_fabrics 00:06:26.668 rmmod nvme_keyring 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 437368 ']' 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 437368 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 437368 ']' 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 437368 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.668 13:30:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 437368 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 437368' 00:06:26.668 killing process with pid 437368 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 437368 00:06:26.668 13:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 437368 00:06:26.668 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:26.668 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:26.668 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:26.668 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:26.668 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.669 13:30:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.254 00:06:29.254 real 0m18.251s 00:06:29.254 user 0m30.730s 00:06:29.254 sys 0m6.881s 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.254 ************************************ 00:06:29.254 END TEST nvmf_delete_subsystem 00:06:29.254 ************************************ 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.254 ************************************ 00:06:29.254 START TEST nvmf_host_management 00:06:29.254 ************************************ 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:29.254 * Looking for test storage... 
00:06:29.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:29.254 13:30:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:29.254 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.255 13:30:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.255 --rc genhtml_branch_coverage=1 00:06:29.255 --rc genhtml_function_coverage=1 00:06:29.255 --rc genhtml_legend=1 00:06:29.255 --rc geninfo_all_blocks=1 00:06:29.255 --rc geninfo_unexecuted_blocks=1 00:06:29.255 00:06:29.255 ' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.255 --rc genhtml_branch_coverage=1 00:06:29.255 --rc genhtml_function_coverage=1 00:06:29.255 --rc genhtml_legend=1 00:06:29.255 --rc geninfo_all_blocks=1 00:06:29.255 --rc geninfo_unexecuted_blocks=1 00:06:29.255 00:06:29.255 ' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.255 --rc genhtml_branch_coverage=1 00:06:29.255 --rc genhtml_function_coverage=1 00:06:29.255 --rc genhtml_legend=1 00:06:29.255 --rc geninfo_all_blocks=1 00:06:29.255 --rc geninfo_unexecuted_blocks=1 00:06:29.255 00:06:29.255 ' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.255 --rc genhtml_branch_coverage=1 00:06:29.255 --rc genhtml_function_coverage=1 00:06:29.255 --rc genhtml_legend=1 00:06:29.255 --rc geninfo_all_blocks=1 00:06:29.255 --rc geninfo_unexecuted_blocks=1 00:06:29.255 00:06:29.255 ' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
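The cmp_versions trace above (scripts/common.sh) splits each version string on '.', '-' and ':' and compares the pieces numerically from left to right, so `lt 1.15 2` holds because 1 < 2 in the first component. A self-contained sketch of that comparison; the function name is ours, missing components count as 0, and only numeric components are handled:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split on . - : and
# compare numeric components left to right; equal versions are not "less".
version_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( 10#$a < 10#$b )) && return 0   # strictly less at this component
    (( 10#$a > 10#$b )) && return 1
  done
  return 1
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```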
00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.255 13:30:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.384 13:30:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.384 13:30:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:37.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:37.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:37.384 13:30:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:37.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:37.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:37.384 13:30:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.384 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
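The `[: : integer expression expected` warning earlier in this trace (common.sh line 33) comes from running `'[' '' -eq 1 ']'`: an arithmetic `[ -eq ]` test against an empty expansion. A minimal reproduction with two common guards — the variable name here is illustrative, not the harness's:

```shell
# Reproduces the common.sh line 33 failure mode: an arithmetic test on an
# empty variable. The unguarded form [ "$flag" -eq 1 ] prints
# "integer expression expected" and returns nonzero.
flag=""

# Guard 1: substitute a numeric default before the arithmetic comparison.
if [ "${flag:-0}" -eq 1 ]; then
  result="enabled"
else
  result="disabled"
fi

# Guard 2: compare as a string, which tolerates the empty value.
if [ "$flag" = "1" ]; then
  result="enabled"
fi

echo "$result"
```

With `flag` empty, both guards fall through and the script prints `disabled` instead of erroring; the log's unguarded test instead relies on the harness tolerating the nonzero exit.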
00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:37.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:06:37.385 00:06:37.385 --- 10.0.0.2 ping statistics --- 00:06:37.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.385 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:06:37.385 00:06:37.385 --- 10.0.0.1 ping statistics --- 00:06:37.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.385 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
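The `nvmf_tcp_init` steps traced above move the target-side interface into its own network namespace, address both ends, open the NVMe/TCP port, and verify with ping. A dry-run sketch of that plumbing, assuming the interface names and addresses from the log; `run` echoes instead of executing, since the real commands need root and this machine's `cvl_0_*` interfaces:

```shell
# Dry-run sketch of the namespace setup in the trace above. With DRY_RUN=1
# (the default here) each command is printed rather than executed.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi
}

setup_nvmf_netns() {
  local ns=$1 tgt_if=$2 ini_if=$3
  run ip netns add "$ns"                                  # target namespace
  run ip link set "$tgt_if" netns "$ns"                   # move target NIC
  run ip addr add 10.0.0.1/24 dev "$ini_if"               # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                                  # reachability check
}

setup_nvmf_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating the target NIC in a namespace is what lets one host act as both initiator and target over real hardware, which is why the log's nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk`.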
00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=443422 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 443422 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 443422 ']' 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
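The `waitforlisten 443422` call above blocks until the freshly launched `nvmf_tgt` is reachable on `/var/tmp/spdk.sock`. A loose sketch of that polling idea, reduced to "wait until the UNIX socket node appears" — the real helper also checks the pid and retries the RPC, and the path and retry budget here are examples:

```shell
# Poll for a UNIX-domain socket with a bounded retry budget.
# Returns 0 once the socket exists, 1 if the budget runs out.
waitforsocket() {
  local sock=$1 retries=${2:-100}
  while (( retries > 0 )); do
    if [ -S "$sock" ]; then
      return 0
    fi
    sleep 0.1
    retries=$((retries - 1))
  done
  return 1  # gave up: the socket never appeared
}
```

Typical use mirrors the log: `waitforsocket /var/tmp/spdk.sock 100 || echo "target did not come up"`.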
00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.385 13:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.385 [2024-11-06 13:30:59.930227] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:06:37.385 [2024-11-06 13:30:59.930315] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.385 [2024-11-06 13:31:00.029994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.385 [2024-11-06 13:31:00.086691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.385 [2024-11-06 13:31:00.086756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.385 [2024-11-06 13:31:00.086765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.385 [2024-11-06 13:31:00.086773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.385 [2024-11-06 13:31:00.086781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:37.385 [2024-11-06 13:31:00.088790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.385 [2024-11-06 13:31:00.089011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.385 [2024-11-06 13:31:00.089176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.385 [2024-11-06 13:31:00.089177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.385 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.385 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:37.385 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.385 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.385 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 [2024-11-06 13:31:00.780599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:37.646 13:31:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 Malloc0 00:06:37.646 [2024-11-06 13:31:00.854878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=443549 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 443549 /var/tmp/bdevperf.sock 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 443549 ']' 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:37.646 { 00:06:37.646 "params": { 00:06:37.646 "name": "Nvme$subsystem", 00:06:37.646 "trtype": "$TEST_TRANSPORT", 00:06:37.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:37.646 "adrfam": "ipv4", 00:06:37.646 "trsvcid": "$NVMF_PORT", 00:06:37.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:37.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:37.646 "hdgst": ${hdgst:-false}, 
00:06:37.646 "ddgst": ${ddgst:-false} 00:06:37.646 }, 00:06:37.646 "method": "bdev_nvme_attach_controller" 00:06:37.646 } 00:06:37.646 EOF 00:06:37.646 )") 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:37.646 13:31:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:37.646 "params": { 00:06:37.646 "name": "Nvme0", 00:06:37.646 "trtype": "tcp", 00:06:37.646 "traddr": "10.0.0.2", 00:06:37.646 "adrfam": "ipv4", 00:06:37.646 "trsvcid": "4420", 00:06:37.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:37.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:37.646 "hdgst": false, 00:06:37.646 "ddgst": false 00:06:37.646 }, 00:06:37.646 "method": "bdev_nvme_attach_controller" 00:06:37.646 }' 00:06:37.646 [2024-11-06 13:31:00.956670] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:06:37.646 [2024-11-06 13:31:00.956724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443549 ] 00:06:37.907 [2024-11-06 13:31:01.027661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.907 [2024-11-06 13:31:01.064212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.167 Running I/O for 10 seconds... 
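The `gen_nvmf_target_json` call traced above assembles bdevperf's `--json` input from a heredoc template, expanding one `bdev_nvme_attach_controller` entry per subsystem index, then normalizes it with `jq`. A simplified sketch of that per-subsystem expansion; the addresses mirror the log, and the enclosing envelope the real helper emits around these entries is omitted:

```shell
# Emit one attach-controller JSON entry for subsystem index $1, using the
# same shell-variable substitution the heredoc in the trace relies on.
gen_subsys_json() {
  local subsystem=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_subsys_json 0)
```

Because the heredoc is unquoted, `$subsystem` (and, in the harness, `${hdgst:-false}`-style defaults) expand at generation time, which is how one template yields the concrete `Nvme0`/`cnode0` block printed by `printf '%s\n'` in the log.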
00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:38.428 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.429 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.429 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=662 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 662 -ge 100 ']' 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.690 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.690 [2024-11-06 13:31:01.829958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is 
same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.690 [2024-11-06 13:31:01.830042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 [2024-11-06 13:31:01.830095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be 
set 00:06:38.691 [2024-11-06 13:31:01.830101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e130 is same with the state(6) to be set 00:06:38.691 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.691 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.691 [2024-11-06 13:31:01.835645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.691 [2024-11-06 13:31:01.835682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.835693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.691 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.691 [2024-11-06 13:31:01.835702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.835718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.691 [2024-11-06 13:31:01.835726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.835734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.691 [2024-11-06 13:31:01.835742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:38.691 [2024-11-06 13:31:01.835756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339000 is same with the state(6) to be set 00:06:38.691 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 [2024-11-06 13:31:01.836211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836606] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.691 [2024-11-06 13:31:01.836731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.691 [2024-11-06 13:31:01.836741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 
[2024-11-06 13:31:01.836817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.836985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.836994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:38.692 [2024-11-06 13:31:01.837211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837310] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.837345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.692 [2024-11-06 13:31:01.837352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.692 [2024-11-06 13:31:01.838590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:38.692 task offset: 99328 on job bdev=Nvme0n1 fails 00:06:38.692 00:06:38.692 Latency(us) 00:06:38.692 [2024-11-06T12:31:02.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.692 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:38.692 Job: Nvme0n1 ended in about 0.51 seconds with error 00:06:38.692 Verification LBA range: start 0x0 length 0x400 00:06:38.692 Nvme0n1 : 0.51 1519.32 94.96 126.61 0.00 37869.99 2170.88 32986.45 00:06:38.692 [2024-11-06T12:31:02.068Z] =================================================================================================================== 00:06:38.692 [2024-11-06T12:31:02.068Z] Total : 1519.32 94.96 126.61 0.00 37869.99 2170.88 32986.45 00:06:38.692 [2024-11-06 13:31:01.840575] app.c:1064:spdk_app_stop: 
*WARNING*: spdk_app_stop'd on non-zero 00:06:38.692 [2024-11-06 13:31:01.840596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339000 (9): Bad file descriptor 00:06:38.692 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.693 13:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:38.693 [2024-11-06 13:31:01.893011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 443549 00:06:39.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (443549) - No such process 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:06:39.635 { 00:06:39.635 "params": { 00:06:39.635 "name": "Nvme$subsystem", 00:06:39.635 "trtype": "$TEST_TRANSPORT", 00:06:39.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.635 "adrfam": "ipv4", 00:06:39.635 "trsvcid": "$NVMF_PORT", 00:06:39.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.635 "hdgst": ${hdgst:-false}, 00:06:39.635 "ddgst": ${ddgst:-false} 00:06:39.635 }, 00:06:39.635 "method": "bdev_nvme_attach_controller" 00:06:39.635 } 00:06:39.635 EOF 00:06:39.635 )") 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:39.635 13:31:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:39.635 "params": { 00:06:39.635 "name": "Nvme0", 00:06:39.635 "trtype": "tcp", 00:06:39.635 "traddr": "10.0.0.2", 00:06:39.635 "adrfam": "ipv4", 00:06:39.635 "trsvcid": "4420", 00:06:39.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.635 "hdgst": false, 00:06:39.635 "ddgst": false 00:06:39.635 }, 00:06:39.635 "method": "bdev_nvme_attach_controller" 00:06:39.635 }' 00:06:39.635 [2024-11-06 13:31:02.905923] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:06:39.635 [2024-11-06 13:31:02.905980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444031 ] 00:06:39.635 [2024-11-06 13:31:02.976462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.898 [2024-11-06 13:31:03.010913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.898 Running I/O for 1 seconds... 00:06:41.100 1600.00 IOPS, 100.00 MiB/s 00:06:41.100 Latency(us) 00:06:41.100 [2024-11-06T12:31:04.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:41.100 Verification LBA range: start 0x0 length 0x400 00:06:41.100 Nvme0n1 : 1.02 1635.21 102.20 0.00 0.00 38457.29 6144.00 32331.09 00:06:41.100 [2024-11-06T12:31:04.476Z] =================================================================================================================== 00:06:41.100 [2024-11-06T12:31:04.476Z] Total : 1635.21 102.20 0.00 0.00 38457.29 6144.00 32331.09 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:41.100 13:31:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.100 rmmod nvme_tcp 00:06:41.100 rmmod nvme_fabrics 00:06:41.100 rmmod nvme_keyring 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 443422 ']' 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 443422 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 443422 ']' 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 443422 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 443422 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 443422' 00:06:41.100 killing process with pid 443422 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 443422 00:06:41.100 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 443422 00:06:41.361 [2024-11-06 13:31:04.559078] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.361 13:31:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:43.904 00:06:43.904 real 0m14.509s 00:06:43.904 user 0m22.567s 00:06:43.904 sys 0m6.768s 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.904 ************************************ 00:06:43.904 END TEST nvmf_host_management 00:06:43.904 ************************************ 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.904 ************************************ 00:06:43.904 START TEST nvmf_lvol 00:06:43.904 ************************************ 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:43.904 * Looking for test storage... 
00:06:43.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.904 13:31:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 
00:06:43.904 00:06:43.904 ' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.904 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.905 13:31:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.905 13:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.039 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:52.040 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:52.040 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.040 
13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:52.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.040 13:31:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:52.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:52.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:06:52.040 00:06:52.040 --- 10.0.0.2 ping statistics --- 00:06:52.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.040 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:06:52.040 00:06:52.040 --- 10.0.0.1 ping statistics --- 00:06:52.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.040 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=448515 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 448515 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 448515 ']' 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.040 13:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.040 [2024-11-06 13:31:14.422521] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:06:52.040 [2024-11-06 13:31:14.422569] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.040 [2024-11-06 13:31:14.499453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.040 [2024-11-06 13:31:14.534257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.040 [2024-11-06 13:31:14.534291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.040 [2024-11-06 13:31:14.534299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.040 [2024-11-06 13:31:14.534306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.040 [2024-11-06 13:31:14.534312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:52.040 [2024-11-06 13:31:14.535814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.040 [2024-11-06 13:31:14.536064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.041 [2024-11-06 13:31:14.536068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.041 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:52.301 [2024-11-06 13:31:15.417623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.301 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.301 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:52.301 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.560 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:52.560 13:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:52.820 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:53.080 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0cdc9b2c-f36b-4369-98fa-3898142c24e4 00:06:53.080 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0cdc9b2c-f36b-4369-98fa-3898142c24e4 lvol 20 00:06:53.080 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1b9a46fd-de82-4381-9949-db7e97f85ef4 00:06:53.081 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.341 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b9a46fd-de82-4381-9949-db7e97f85ef4 00:06:53.601 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.602 [2024-11-06 13:31:16.898644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.602 13:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.861 13:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=449215 00:06:53.861 13:31:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:53.861 13:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:54.801 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1b9a46fd-de82-4381-9949-db7e97f85ef4 MY_SNAPSHOT 00:06:55.061 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a6059e39-4bdf-4939-8ef7-79ee496fcd6c 00:06:55.061 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1b9a46fd-de82-4381-9949-db7e97f85ef4 30 00:06:55.321 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a6059e39-4bdf-4939-8ef7-79ee496fcd6c MY_CLONE 00:06:55.581 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c2e7d191-8db8-4228-97db-19cfb5c3a835 00:06:55.581 13:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c2e7d191-8db8-4228-97db-19cfb5c3a835 00:06:56.151 13:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 449215 00:07:04.289 Initializing NVMe Controllers 00:07:04.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:04.289 Controller IO queue size 128, less than required. 00:07:04.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:04.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:04.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:04.289 Initialization complete. Launching workers. 00:07:04.289 ======================================================== 00:07:04.289 Latency(us) 00:07:04.289 Device Information : IOPS MiB/s Average min max 00:07:04.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12161.90 47.51 10527.81 1517.03 41821.79 00:07:04.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17569.60 68.63 7286.47 318.06 54549.81 00:07:04.289 ======================================================== 00:07:04.289 Total : 29731.50 116.14 8612.36 318.06 54549.81 00:07:04.289 00:07:04.289 13:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:04.289 13:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b9a46fd-de82-4381-9949-db7e97f85ef4 00:07:04.550 13:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cdc9b2c-f36b-4369-98fa-3898142c24e4 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.811 rmmod nvme_tcp 00:07:04.811 rmmod nvme_fabrics 00:07:04.811 rmmod nvme_keyring 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 448515 ']' 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 448515 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 448515 ']' 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 448515 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 448515 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 448515' 00:07:04.811 killing process with pid 448515 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@971 -- # kill 448515 00:07:04.811 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 448515 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.072 13:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.623 00:07:07.623 real 0m23.622s 00:07:07.623 user 1m4.240s 00:07:07.623 sys 0m8.375s 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 END TEST nvmf_lvol 00:07:07.623 
************************************ 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 START TEST nvmf_lvs_grow 00:07:07.623 ************************************ 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:07.623 * Looking for test storage... 00:07:07.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.623 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.624 --rc genhtml_branch_coverage=1 00:07:07.624 --rc genhtml_function_coverage=1 00:07:07.624 --rc genhtml_legend=1 00:07:07.624 --rc geninfo_all_blocks=1 00:07:07.624 --rc geninfo_unexecuted_blocks=1 00:07:07.624 00:07:07.624 ' 
00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.624 --rc genhtml_branch_coverage=1 00:07:07.624 --rc genhtml_function_coverage=1 00:07:07.624 --rc genhtml_legend=1 00:07:07.624 --rc geninfo_all_blocks=1 00:07:07.624 --rc geninfo_unexecuted_blocks=1 00:07:07.624 00:07:07.624 ' 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.624 --rc genhtml_branch_coverage=1 00:07:07.624 --rc genhtml_function_coverage=1 00:07:07.624 --rc genhtml_legend=1 00:07:07.624 --rc geninfo_all_blocks=1 00:07:07.624 --rc geninfo_unexecuted_blocks=1 00:07:07.624 00:07:07.624 ' 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.624 --rc genhtml_branch_coverage=1 00:07:07.624 --rc genhtml_function_coverage=1 00:07:07.624 --rc genhtml_legend=1 00:07:07.624 --rc geninfo_all_blocks=1 00:07:07.624 --rc geninfo_unexecuted_blocks=1 00:07:07.624 00:07:07.624 ' 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.624 13:31:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.624 
13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.624 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.625 13:31:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.625 
13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.625 13:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.763 
13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.763 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.764 13:31:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:07:15.764 00:07:15.764 --- 10.0.0.2 ping statistics --- 00:07:15.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.764 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:07:15.764 00:07:15.764 --- 10.0.0.1 ping statistics --- 00:07:15.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.764 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=455600 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 455600 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 455600 ']' 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.764 13:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:15.764 [2024-11-06 13:31:38.037382] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:07:15.764 [2024-11-06 13:31:38.037459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.764 [2024-11-06 13:31:38.121206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.764 [2024-11-06 13:31:38.164443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.764 [2024-11-06 13:31:38.164481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.764 [2024-11-06 13:31:38.164493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.764 [2024-11-06 13:31:38.164500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.764 [2024-11-06 13:31:38.164505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.764 [2024-11-06 13:31:38.165139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.764 13:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.764 [2024-11-06 13:31:39.006763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.764 ************************************ 00:07:15.764 START TEST lvs_grow_clean 00:07:15.764 ************************************ 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.764 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.025 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:16.025 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:16.289 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fdea0ec-148e-4f0b-b959-971929201311 00:07:16.289 13:31:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:16.289 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:16.289 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:16.289 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:16.289 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fdea0ec-148e-4f0b-b959-971929201311 lvol 150 00:07:16.549 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=781745df-5676-4741-9dca-dbbf556c379e 00:07:16.549 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.549 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:16.809 [2024-11-06 13:31:39.937951] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:16.809 [2024-11-06 13:31:39.938006] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:16.809 true 00:07:16.809 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:16.809 13:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:16.809 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:16.809 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.071 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 781745df-5676-4741-9dca-dbbf556c379e 00:07:17.331 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.331 [2024-11-06 13:31:40.632094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.332 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=456302 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
456302 /var/tmp/bdevperf.sock 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 456302 ']' 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:17.592 13:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:17.592 [2024-11-06 13:31:40.867170] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:07:17.592 [2024-11-06 13:31:40.867223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456302 ] 00:07:17.592 [2024-11-06 13:31:40.952947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.853 [2024-11-06 13:31:40.988896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.426 13:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:18.426 13:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:18.426 13:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:18.688 Nvme0n1 00:07:18.688 13:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:18.949 [ 00:07:18.949 { 00:07:18.949 "name": "Nvme0n1", 00:07:18.949 "aliases": [ 00:07:18.949 "781745df-5676-4741-9dca-dbbf556c379e" 00:07:18.949 ], 00:07:18.949 "product_name": "NVMe disk", 00:07:18.949 "block_size": 4096, 00:07:18.949 "num_blocks": 38912, 00:07:18.949 "uuid": "781745df-5676-4741-9dca-dbbf556c379e", 00:07:18.949 "numa_id": 0, 00:07:18.949 "assigned_rate_limits": { 00:07:18.949 "rw_ios_per_sec": 0, 00:07:18.949 "rw_mbytes_per_sec": 0, 00:07:18.949 "r_mbytes_per_sec": 0, 00:07:18.949 "w_mbytes_per_sec": 0 00:07:18.949 }, 00:07:18.949 "claimed": false, 00:07:18.949 "zoned": false, 00:07:18.949 "supported_io_types": { 00:07:18.949 "read": true, 
00:07:18.949 "write": true, 00:07:18.949 "unmap": true, 00:07:18.949 "flush": true, 00:07:18.949 "reset": true, 00:07:18.949 "nvme_admin": true, 00:07:18.949 "nvme_io": true, 00:07:18.949 "nvme_io_md": false, 00:07:18.949 "write_zeroes": true, 00:07:18.949 "zcopy": false, 00:07:18.949 "get_zone_info": false, 00:07:18.949 "zone_management": false, 00:07:18.949 "zone_append": false, 00:07:18.949 "compare": true, 00:07:18.949 "compare_and_write": true, 00:07:18.949 "abort": true, 00:07:18.949 "seek_hole": false, 00:07:18.949 "seek_data": false, 00:07:18.949 "copy": true, 00:07:18.949 "nvme_iov_md": false 00:07:18.949 }, 00:07:18.949 "memory_domains": [ 00:07:18.949 { 00:07:18.949 "dma_device_id": "system", 00:07:18.949 "dma_device_type": 1 00:07:18.949 } 00:07:18.949 ], 00:07:18.949 "driver_specific": { 00:07:18.949 "nvme": [ 00:07:18.949 { 00:07:18.949 "trid": { 00:07:18.949 "trtype": "TCP", 00:07:18.949 "adrfam": "IPv4", 00:07:18.949 "traddr": "10.0.0.2", 00:07:18.949 "trsvcid": "4420", 00:07:18.949 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:18.949 }, 00:07:18.949 "ctrlr_data": { 00:07:18.949 "cntlid": 1, 00:07:18.949 "vendor_id": "0x8086", 00:07:18.949 "model_number": "SPDK bdev Controller", 00:07:18.949 "serial_number": "SPDK0", 00:07:18.949 "firmware_revision": "25.01", 00:07:18.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:18.949 "oacs": { 00:07:18.949 "security": 0, 00:07:18.949 "format": 0, 00:07:18.949 "firmware": 0, 00:07:18.949 "ns_manage": 0 00:07:18.949 }, 00:07:18.949 "multi_ctrlr": true, 00:07:18.949 "ana_reporting": false 00:07:18.949 }, 00:07:18.949 "vs": { 00:07:18.949 "nvme_version": "1.3" 00:07:18.949 }, 00:07:18.949 "ns_data": { 00:07:18.949 "id": 1, 00:07:18.949 "can_share": true 00:07:18.949 } 00:07:18.949 } 00:07:18.949 ], 00:07:18.949 "mp_policy": "active_passive" 00:07:18.949 } 00:07:18.949 } 00:07:18.949 ] 00:07:18.949 13:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=456481 
00:07:18.950 13:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:18.950 13:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:18.950 Running I/O for 10 seconds... 00:07:20.335 Latency(us) 00:07:20.335 [2024-11-06T12:31:43.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.335 Nvme0n1 : 1.00 17910.00 69.96 0.00 0.00 0.00 0.00 0.00 00:07:20.335 [2024-11-06T12:31:43.711Z] =================================================================================================================== 00:07:20.335 [2024-11-06T12:31:43.711Z] Total : 17910.00 69.96 0.00 0.00 0.00 0.00 0.00 00:07:20.335 00:07:20.909 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:20.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.909 Nvme0n1 : 2.00 17939.50 70.08 0.00 0.00 0.00 0.00 0.00 00:07:20.909 [2024-11-06T12:31:44.285Z] =================================================================================================================== 00:07:20.909 [2024-11-06T12:31:44.285Z] Total : 17939.50 70.08 0.00 0.00 0.00 0.00 0.00 00:07:20.909 00:07:21.170 true 00:07:21.170 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:21.170 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:21.430 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:21.430 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:21.430 13:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 456481 00:07:22.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.001 Nvme0n1 : 3.00 17990.33 70.27 0.00 0.00 0.00 0.00 0.00 00:07:22.001 [2024-11-06T12:31:45.377Z] =================================================================================================================== 00:07:22.001 [2024-11-06T12:31:45.377Z] Total : 17990.33 70.27 0.00 0.00 0.00 0.00 0.00 00:07:22.001 00:07:22.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.941 Nvme0n1 : 4.00 18026.25 70.42 0.00 0.00 0.00 0.00 0.00 00:07:22.941 [2024-11-06T12:31:46.317Z] =================================================================================================================== 00:07:22.941 [2024-11-06T12:31:46.317Z] Total : 18026.25 70.42 0.00 0.00 0.00 0.00 0.00 00:07:22.941 00:07:24.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.324 Nvme0n1 : 5.00 18045.40 70.49 0.00 0.00 0.00 0.00 0.00 00:07:24.324 [2024-11-06T12:31:47.700Z] =================================================================================================================== 00:07:24.324 [2024-11-06T12:31:47.700Z] Total : 18045.40 70.49 0.00 0.00 0.00 0.00 0.00 00:07:24.324 00:07:25.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.266 Nvme0n1 : 6.00 18073.67 70.60 0.00 0.00 0.00 0.00 0.00 00:07:25.266 [2024-11-06T12:31:48.642Z] =================================================================================================================== 00:07:25.266 
[2024-11-06T12:31:48.642Z] Total : 18073.67 70.60 0.00 0.00 0.00 0.00 0.00 00:07:25.266 00:07:26.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.207 Nvme0n1 : 7.00 18081.43 70.63 0.00 0.00 0.00 0.00 0.00 00:07:26.207 [2024-11-06T12:31:49.583Z] =================================================================================================================== 00:07:26.207 [2024-11-06T12:31:49.583Z] Total : 18081.43 70.63 0.00 0.00 0.00 0.00 0.00 00:07:26.207 00:07:27.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.151 Nvme0n1 : 8.00 18095.50 70.69 0.00 0.00 0.00 0.00 0.00 00:07:27.151 [2024-11-06T12:31:50.527Z] =================================================================================================================== 00:07:27.151 [2024-11-06T12:31:50.527Z] Total : 18095.50 70.69 0.00 0.00 0.00 0.00 0.00 00:07:27.151 00:07:28.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.093 Nvme0n1 : 9.00 18098.22 70.70 0.00 0.00 0.00 0.00 0.00 00:07:28.093 [2024-11-06T12:31:51.469Z] =================================================================================================================== 00:07:28.093 [2024-11-06T12:31:51.469Z] Total : 18098.22 70.70 0.00 0.00 0.00 0.00 0.00 00:07:28.093 00:07:29.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.033 Nvme0n1 : 10.00 18117.10 70.77 0.00 0.00 0.00 0.00 0.00 00:07:29.033 [2024-11-06T12:31:52.409Z] =================================================================================================================== 00:07:29.033 [2024-11-06T12:31:52.409Z] Total : 18117.10 70.77 0.00 0.00 0.00 0.00 0.00 00:07:29.033 00:07:29.033 00:07:29.033 Latency(us) 00:07:29.033 [2024-11-06T12:31:52.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:29.033 Nvme0n1 : 10.01 18118.12 70.77 0.00 0.00 7062.36 2061.65 13216.43 00:07:29.033 [2024-11-06T12:31:52.409Z] =================================================================================================================== 00:07:29.033 [2024-11-06T12:31:52.409Z] Total : 18118.12 70.77 0.00 0.00 7062.36 2061.65 13216.43 00:07:29.033 { 00:07:29.033 "results": [ 00:07:29.033 { 00:07:29.033 "job": "Nvme0n1", 00:07:29.033 "core_mask": "0x2", 00:07:29.033 "workload": "randwrite", 00:07:29.033 "status": "finished", 00:07:29.033 "queue_depth": 128, 00:07:29.033 "io_size": 4096, 00:07:29.033 "runtime": 10.006503, 00:07:29.033 "iops": 18118.117788002462, 00:07:29.033 "mibps": 70.77389760938462, 00:07:29.033 "io_failed": 0, 00:07:29.033 "io_timeout": 0, 00:07:29.033 "avg_latency_us": 7062.361325618637, 00:07:29.033 "min_latency_us": 2061.653333333333, 00:07:29.033 "max_latency_us": 13216.426666666666 00:07:29.033 } 00:07:29.033 ], 00:07:29.033 "core_count": 1 00:07:29.033 } 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 456302 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 456302 ']' 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 456302 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456302 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:29.033 13:31:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456302' 00:07:29.033 killing process with pid 456302 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 456302 00:07:29.033 Received shutdown signal, test time was about 10.000000 seconds 00:07:29.033 00:07:29.033 Latency(us) 00:07:29.033 [2024-11-06T12:31:52.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.033 [2024-11-06T12:31:52.409Z] =================================================================================================================== 00:07:29.033 [2024-11-06T12:31:52.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:29.033 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 456302 00:07:29.293 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.293 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:29.554 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:29.554 13:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:29.813 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:29.813 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:29.813 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:29.813 [2024-11-06 13:31:53.160309] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.073 13:31:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:30.073 request: 00:07:30.073 { 00:07:30.073 "uuid": "3fdea0ec-148e-4f0b-b959-971929201311", 00:07:30.073 "method": "bdev_lvol_get_lvstores", 00:07:30.073 "req_id": 1 00:07:30.073 } 00:07:30.073 Got JSON-RPC error response 00:07:30.073 response: 00:07:30.073 { 00:07:30.073 "code": -19, 00:07:30.073 "message": "No such device" 00:07:30.073 } 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.073 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:30.334 aio_bdev 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 781745df-5676-4741-9dca-dbbf556c379e 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=781745df-5676-4741-9dca-dbbf556c379e 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:30.334 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 781745df-5676-4741-9dca-dbbf556c379e -t 2000 00:07:30.595 [ 00:07:30.595 { 00:07:30.595 "name": "781745df-5676-4741-9dca-dbbf556c379e", 00:07:30.595 "aliases": [ 00:07:30.595 "lvs/lvol" 00:07:30.595 ], 00:07:30.595 "product_name": "Logical Volume", 00:07:30.595 "block_size": 4096, 00:07:30.595 "num_blocks": 38912, 00:07:30.595 "uuid": "781745df-5676-4741-9dca-dbbf556c379e", 00:07:30.595 "assigned_rate_limits": { 00:07:30.595 "rw_ios_per_sec": 0, 00:07:30.595 "rw_mbytes_per_sec": 0, 00:07:30.595 "r_mbytes_per_sec": 0, 00:07:30.595 "w_mbytes_per_sec": 0 00:07:30.595 }, 00:07:30.595 "claimed": false, 00:07:30.595 "zoned": false, 00:07:30.595 "supported_io_types": { 00:07:30.595 "read": true, 00:07:30.595 "write": true, 00:07:30.595 "unmap": true, 00:07:30.595 "flush": false, 00:07:30.595 "reset": true, 00:07:30.595 
"nvme_admin": false, 00:07:30.595 "nvme_io": false, 00:07:30.595 "nvme_io_md": false, 00:07:30.595 "write_zeroes": true, 00:07:30.595 "zcopy": false, 00:07:30.595 "get_zone_info": false, 00:07:30.595 "zone_management": false, 00:07:30.595 "zone_append": false, 00:07:30.595 "compare": false, 00:07:30.595 "compare_and_write": false, 00:07:30.595 "abort": false, 00:07:30.595 "seek_hole": true, 00:07:30.595 "seek_data": true, 00:07:30.595 "copy": false, 00:07:30.595 "nvme_iov_md": false 00:07:30.595 }, 00:07:30.595 "driver_specific": { 00:07:30.595 "lvol": { 00:07:30.595 "lvol_store_uuid": "3fdea0ec-148e-4f0b-b959-971929201311", 00:07:30.595 "base_bdev": "aio_bdev", 00:07:30.595 "thin_provision": false, 00:07:30.595 "num_allocated_clusters": 38, 00:07:30.595 "snapshot": false, 00:07:30.595 "clone": false, 00:07:30.595 "esnap_clone": false 00:07:30.595 } 00:07:30.595 } 00:07:30.595 } 00:07:30.595 ] 00:07:30.595 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:30.595 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:30.595 13:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:30.856 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:30.856 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:30.856 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:30.856 13:31:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:30.856 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 781745df-5676-4741-9dca-dbbf556c379e 00:07:31.116 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fdea0ec-148e-4f0b-b959-971929201311 00:07:31.376 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.376 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.376 00:07:31.376 real 0m15.665s 00:07:31.376 user 0m15.418s 00:07:31.376 sys 0m1.311s 00:07:31.376 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.376 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:31.376 ************************************ 00:07:31.376 END TEST lvs_grow_clean 00:07:31.376 ************************************ 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.637 ************************************ 
00:07:31.637 START TEST lvs_grow_dirty 00:07:31.637 ************************************ 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.637 13:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.637 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.637 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.897 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:31.897 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:31.897 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 lvol 150 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.158 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.418 [2024-11-06 13:31:55.671947] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:32.418 [2024-11-06 13:31:55.672002] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.418 true 00:07:32.418 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.418 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:32.679 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.679 13:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.679 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:32.939 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.199 [2024-11-06 13:31:56.350020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=459408 00:07:33.200 13:31:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 459408 /var/tmp/bdevperf.sock 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 459408 ']' 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.200 13:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:33.459 [2024-11-06 13:31:56.579842] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:07:33.459 [2024-11-06 13:31:56.579894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459408 ] 00:07:33.459 [2024-11-06 13:31:56.667046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.459 [2024-11-06 13:31:56.703092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.029 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.029 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:34.029 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:34.600 Nvme0n1 00:07:34.600 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:34.600 [ 00:07:34.600 { 00:07:34.600 "name": "Nvme0n1", 00:07:34.600 "aliases": [ 00:07:34.600 "087d512c-5c12-4dd4-99e3-0a72cc43d555" 00:07:34.600 ], 00:07:34.600 "product_name": "NVMe disk", 00:07:34.600 "block_size": 4096, 00:07:34.600 "num_blocks": 38912, 00:07:34.600 "uuid": "087d512c-5c12-4dd4-99e3-0a72cc43d555", 00:07:34.600 "numa_id": 0, 00:07:34.600 "assigned_rate_limits": { 00:07:34.600 "rw_ios_per_sec": 0, 00:07:34.600 "rw_mbytes_per_sec": 0, 00:07:34.600 "r_mbytes_per_sec": 0, 00:07:34.600 "w_mbytes_per_sec": 0 00:07:34.600 }, 00:07:34.600 "claimed": false, 00:07:34.600 "zoned": false, 00:07:34.600 "supported_io_types": { 00:07:34.600 "read": true, 
00:07:34.600 "write": true, 00:07:34.600 "unmap": true, 00:07:34.600 "flush": true, 00:07:34.600 "reset": true, 00:07:34.600 "nvme_admin": true, 00:07:34.600 "nvme_io": true, 00:07:34.600 "nvme_io_md": false, 00:07:34.600 "write_zeroes": true, 00:07:34.600 "zcopy": false, 00:07:34.600 "get_zone_info": false, 00:07:34.600 "zone_management": false, 00:07:34.600 "zone_append": false, 00:07:34.600 "compare": true, 00:07:34.600 "compare_and_write": true, 00:07:34.600 "abort": true, 00:07:34.600 "seek_hole": false, 00:07:34.600 "seek_data": false, 00:07:34.600 "copy": true, 00:07:34.600 "nvme_iov_md": false 00:07:34.600 }, 00:07:34.600 "memory_domains": [ 00:07:34.600 { 00:07:34.600 "dma_device_id": "system", 00:07:34.600 "dma_device_type": 1 00:07:34.600 } 00:07:34.600 ], 00:07:34.600 "driver_specific": { 00:07:34.600 "nvme": [ 00:07:34.600 { 00:07:34.600 "trid": { 00:07:34.600 "trtype": "TCP", 00:07:34.600 "adrfam": "IPv4", 00:07:34.600 "traddr": "10.0.0.2", 00:07:34.600 "trsvcid": "4420", 00:07:34.600 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:34.600 }, 00:07:34.600 "ctrlr_data": { 00:07:34.600 "cntlid": 1, 00:07:34.600 "vendor_id": "0x8086", 00:07:34.600 "model_number": "SPDK bdev Controller", 00:07:34.600 "serial_number": "SPDK0", 00:07:34.600 "firmware_revision": "25.01", 00:07:34.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.600 "oacs": { 00:07:34.600 "security": 0, 00:07:34.600 "format": 0, 00:07:34.600 "firmware": 0, 00:07:34.600 "ns_manage": 0 00:07:34.600 }, 00:07:34.600 "multi_ctrlr": true, 00:07:34.600 "ana_reporting": false 00:07:34.600 }, 00:07:34.600 "vs": { 00:07:34.600 "nvme_version": "1.3" 00:07:34.600 }, 00:07:34.600 "ns_data": { 00:07:34.600 "id": 1, 00:07:34.600 "can_share": true 00:07:34.600 } 00:07:34.600 } 00:07:34.600 ], 00:07:34.600 "mp_policy": "active_passive" 00:07:34.600 } 00:07:34.600 } 00:07:34.600 ] 00:07:34.600 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=459744 
00:07:34.600 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:34.600 13:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:34.600 Running I/O for 10 seconds... 00:07:35.983 Latency(us) 00:07:35.983 [2024-11-06T12:31:59.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.983 Nvme0n1 : 1.00 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:07:35.983 [2024-11-06T12:31:59.359Z] =================================================================================================================== 00:07:35.983 [2024-11-06T12:31:59.359Z] Total : 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:07:35.983 00:07:36.553 13:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:36.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.813 Nvme0n1 : 2.00 17970.00 70.20 0.00 0.00 0.00 0.00 0.00 00:07:36.813 [2024-11-06T12:32:00.189Z] =================================================================================================================== 00:07:36.813 [2024-11-06T12:32:00.189Z] Total : 17970.00 70.20 0.00 0.00 0.00 0.00 0.00 00:07:36.813 00:07:36.813 true 00:07:36.813 13:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:36.813 13:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:37.073 13:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:37.073 13:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:37.073 13:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 459744 00:07:37.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.643 Nvme0n1 : 3.00 17996.67 70.30 0.00 0.00 0.00 0.00 0.00 00:07:37.643 [2024-11-06T12:32:01.019Z] =================================================================================================================== 00:07:37.643 [2024-11-06T12:32:01.019Z] Total : 17996.67 70.30 0.00 0.00 0.00 0.00 0.00 00:07:37.643 00:07:39.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.026 Nvme0n1 : 4.00 18037.00 70.46 0.00 0.00 0.00 0.00 0.00 00:07:39.026 [2024-11-06T12:32:02.402Z] =================================================================================================================== 00:07:39.026 [2024-11-06T12:32:02.402Z] Total : 18037.00 70.46 0.00 0.00 0.00 0.00 0.00 00:07:39.026 00:07:39.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.968 Nvme0n1 : 5.00 18068.40 70.58 0.00 0.00 0.00 0.00 0.00 00:07:39.968 [2024-11-06T12:32:03.344Z] =================================================================================================================== 00:07:39.968 [2024-11-06T12:32:03.344Z] Total : 18068.40 70.58 0.00 0.00 0.00 0.00 0.00 00:07:39.968 00:07:40.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.909 Nvme0n1 : 6.00 18100.67 70.71 0.00 0.00 0.00 0.00 0.00 00:07:40.909 [2024-11-06T12:32:04.285Z] =================================================================================================================== 00:07:40.909 
[2024-11-06T12:32:04.285Z] Total : 18100.67 70.71 0.00 0.00 0.00 0.00 0.00 00:07:40.909 00:07:41.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.849 Nvme0n1 : 7.00 18106.00 70.73 0.00 0.00 0.00 0.00 0.00 00:07:41.849 [2024-11-06T12:32:05.225Z] =================================================================================================================== 00:07:41.849 [2024-11-06T12:32:05.225Z] Total : 18106.00 70.73 0.00 0.00 0.00 0.00 0.00 00:07:41.849 00:07:42.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.790 Nvme0n1 : 8.00 18126.00 70.80 0.00 0.00 0.00 0.00 0.00 00:07:42.790 [2024-11-06T12:32:06.166Z] =================================================================================================================== 00:07:42.790 [2024-11-06T12:32:06.166Z] Total : 18126.00 70.80 0.00 0.00 0.00 0.00 0.00 00:07:42.790 00:07:43.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.731 Nvme0n1 : 9.00 18140.67 70.86 0.00 0.00 0.00 0.00 0.00 00:07:43.731 [2024-11-06T12:32:07.107Z] =================================================================================================================== 00:07:43.731 [2024-11-06T12:32:07.107Z] Total : 18140.67 70.86 0.00 0.00 0.00 0.00 0.00 00:07:43.731 00:07:44.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.671 Nvme0n1 : 10.00 18145.50 70.88 0.00 0.00 0.00 0.00 0.00 00:07:44.671 [2024-11-06T12:32:08.047Z] =================================================================================================================== 00:07:44.671 [2024-11-06T12:32:08.047Z] Total : 18145.50 70.88 0.00 0.00 0.00 0.00 0.00 00:07:44.671 00:07:44.671 00:07:44.671 Latency(us) 00:07:44.671 [2024-11-06T12:32:08.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:44.671 Nvme0n1 : 10.00 18154.12 70.91 0.00 0.00 7048.72 2061.65 13271.04 00:07:44.671 [2024-11-06T12:32:08.047Z] =================================================================================================================== 00:07:44.671 [2024-11-06T12:32:08.047Z] Total : 18154.12 70.91 0.00 0.00 7048.72 2061.65 13271.04 00:07:44.671 { 00:07:44.671 "results": [ 00:07:44.671 { 00:07:44.671 "job": "Nvme0n1", 00:07:44.671 "core_mask": "0x2", 00:07:44.671 "workload": "randwrite", 00:07:44.671 "status": "finished", 00:07:44.671 "queue_depth": 128, 00:07:44.671 "io_size": 4096, 00:07:44.671 "runtime": 10.0023, 00:07:44.671 "iops": 18154.12455135319, 00:07:44.671 "mibps": 70.9145490287234, 00:07:44.671 "io_failed": 0, 00:07:44.671 "io_timeout": 0, 00:07:44.671 "avg_latency_us": 7048.718020225829, 00:07:44.671 "min_latency_us": 2061.653333333333, 00:07:44.671 "max_latency_us": 13271.04 00:07:44.671 } 00:07:44.671 ], 00:07:44.671 "core_count": 1 00:07:44.671 } 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 459408 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 459408 ']' 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 459408 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.671 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 459408 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 459408' 00:07:44.931 killing process with pid 459408 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 459408 00:07:44.931 Received shutdown signal, test time was about 10.000000 seconds 00:07:44.931 00:07:44.931 Latency(us) 00:07:44.931 [2024-11-06T12:32:08.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.931 [2024-11-06T12:32:08.307Z] =================================================================================================================== 00:07:44.931 [2024-11-06T12:32:08.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 459408 00:07:44.931 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.191 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:45.191 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:45.191 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:45.452 13:32:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 455600 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 455600 00:07:45.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 455600 Killed "${NVMF_APP[@]}" "$@" 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=461782 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 461782 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 461782 ']' 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.452 13:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:45.712 [2024-11-06 13:32:08.837247] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:07:45.712 [2024-11-06 13:32:08.837296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.712 [2024-11-06 13:32:08.913790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.712 [2024-11-06 13:32:08.948427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.712 [2024-11-06 13:32:08.948462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.712 [2024-11-06 13:32:08.948471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.712 [2024-11-06 13:32:08.948477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.712 [2024-11-06 13:32:08.948483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.712 [2024-11-06 13:32:08.949048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.283 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.283 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:46.283 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.283 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.283 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.543 [2024-11-06 13:32:09.831704] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:46.543 [2024-11-06 13:32:09.831805] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:46.543 [2024-11-06 13:32:09.831836] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=087d512c-5c12-4dd4-99e3-0a72cc43d555 
00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:46.543 13:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.803 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 087d512c-5c12-4dd4-99e3-0a72cc43d555 -t 2000 00:07:46.803 [ 00:07:46.803 { 00:07:46.803 "name": "087d512c-5c12-4dd4-99e3-0a72cc43d555", 00:07:46.803 "aliases": [ 00:07:46.803 "lvs/lvol" 00:07:46.803 ], 00:07:46.803 "product_name": "Logical Volume", 00:07:46.803 "block_size": 4096, 00:07:46.803 "num_blocks": 38912, 00:07:46.803 "uuid": "087d512c-5c12-4dd4-99e3-0a72cc43d555", 00:07:46.803 "assigned_rate_limits": { 00:07:46.803 "rw_ios_per_sec": 0, 00:07:46.803 "rw_mbytes_per_sec": 0, 00:07:46.803 "r_mbytes_per_sec": 0, 00:07:46.803 "w_mbytes_per_sec": 0 00:07:46.803 }, 00:07:46.803 "claimed": false, 00:07:46.803 "zoned": false, 00:07:46.803 "supported_io_types": { 00:07:46.803 "read": true, 00:07:46.803 "write": true, 00:07:46.803 "unmap": true, 00:07:46.803 "flush": false, 00:07:46.803 "reset": true, 00:07:46.803 "nvme_admin": false, 00:07:46.803 "nvme_io": false, 00:07:46.803 "nvme_io_md": false, 00:07:46.803 "write_zeroes": true, 00:07:46.803 "zcopy": false, 00:07:46.803 "get_zone_info": false, 00:07:46.803 "zone_management": false, 00:07:46.803 "zone_append": 
false, 00:07:46.803 "compare": false, 00:07:46.803 "compare_and_write": false, 00:07:46.803 "abort": false, 00:07:46.803 "seek_hole": true, 00:07:46.803 "seek_data": true, 00:07:46.803 "copy": false, 00:07:46.803 "nvme_iov_md": false 00:07:46.803 }, 00:07:46.803 "driver_specific": { 00:07:46.803 "lvol": { 00:07:46.803 "lvol_store_uuid": "04ac27d0-80ce-4edf-b554-b48cba4bc210", 00:07:46.803 "base_bdev": "aio_bdev", 00:07:46.803 "thin_provision": false, 00:07:46.803 "num_allocated_clusters": 38, 00:07:46.803 "snapshot": false, 00:07:46.803 "clone": false, 00:07:46.803 "esnap_clone": false 00:07:46.803 } 00:07:46.803 } 00:07:46.803 } 00:07:46.803 ] 00:07:47.063 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:47.064 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:47.064 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:47.064 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:47.064 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:47.064 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:47.323 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:47.323 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:47.323 [2024-11-06 13:32:10.696036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.583 13:32:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:47.583 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:47.583 request: 00:07:47.583 { 00:07:47.583 "uuid": "04ac27d0-80ce-4edf-b554-b48cba4bc210", 00:07:47.583 "method": "bdev_lvol_get_lvstores", 00:07:47.583 "req_id": 1 00:07:47.583 } 00:07:47.583 Got JSON-RPC error response 00:07:47.583 response: 00:07:47.583 { 00:07:47.583 "code": -19, 00:07:47.583 "message": "No such device" 00:07:47.583 } 00:07:47.584 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:47.584 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.584 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.584 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.584 13:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.844 aio_bdev 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:47.844 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.105 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 087d512c-5c12-4dd4-99e3-0a72cc43d555 -t 2000 00:07:48.105 [ 00:07:48.105 { 00:07:48.105 "name": "087d512c-5c12-4dd4-99e3-0a72cc43d555", 00:07:48.105 "aliases": [ 00:07:48.105 "lvs/lvol" 00:07:48.105 ], 00:07:48.105 "product_name": "Logical Volume", 00:07:48.105 "block_size": 4096, 00:07:48.105 "num_blocks": 38912, 00:07:48.105 "uuid": "087d512c-5c12-4dd4-99e3-0a72cc43d555", 00:07:48.105 "assigned_rate_limits": { 00:07:48.105 "rw_ios_per_sec": 0, 00:07:48.105 "rw_mbytes_per_sec": 0, 00:07:48.105 "r_mbytes_per_sec": 0, 00:07:48.105 "w_mbytes_per_sec": 0 00:07:48.105 }, 00:07:48.105 "claimed": false, 00:07:48.105 "zoned": false, 00:07:48.105 "supported_io_types": { 00:07:48.105 "read": true, 00:07:48.105 "write": true, 00:07:48.105 "unmap": true, 00:07:48.105 "flush": false, 00:07:48.105 "reset": true, 00:07:48.105 "nvme_admin": false, 00:07:48.105 "nvme_io": false, 00:07:48.105 "nvme_io_md": false, 00:07:48.105 "write_zeroes": true, 00:07:48.105 "zcopy": false, 00:07:48.105 "get_zone_info": false, 00:07:48.105 "zone_management": false, 00:07:48.105 "zone_append": false, 00:07:48.105 "compare": false, 00:07:48.105 "compare_and_write": false, 
00:07:48.105 "abort": false, 00:07:48.105 "seek_hole": true, 00:07:48.105 "seek_data": true, 00:07:48.105 "copy": false, 00:07:48.105 "nvme_iov_md": false 00:07:48.105 }, 00:07:48.105 "driver_specific": { 00:07:48.105 "lvol": { 00:07:48.105 "lvol_store_uuid": "04ac27d0-80ce-4edf-b554-b48cba4bc210", 00:07:48.105 "base_bdev": "aio_bdev", 00:07:48.105 "thin_provision": false, 00:07:48.105 "num_allocated_clusters": 38, 00:07:48.105 "snapshot": false, 00:07:48.105 "clone": false, 00:07:48.105 "esnap_clone": false 00:07:48.105 } 00:07:48.105 } 00:07:48.105 } 00:07:48.105 ] 00:07:48.105 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:48.105 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:48.105 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:48.365 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:48.366 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:48.366 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:48.630 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:48.630 13:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 087d512c-5c12-4dd4-99e3-0a72cc43d555 00:07:48.630 13:32:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04ac27d0-80ce-4edf-b554-b48cba4bc210 00:07:48.890 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.150 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.150 00:07:49.150 real 0m17.501s 00:07:49.150 user 0m45.677s 00:07:49.150 sys 0m2.922s 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 END TEST lvs_grow_dirty 00:07:49.151 ************************************ 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:49.151 nvmf_trace.0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:49.151 rmmod nvme_tcp 00:07:49.151 rmmod nvme_fabrics 00:07:49.151 rmmod nvme_keyring 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 461782 ']' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 461782 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 461782 ']' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 461782 
00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:49.151 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 461782 00:07:49.411 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:49.411 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:49.411 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 461782' 00:07:49.412 killing process with pid 461782 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 461782 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 461782 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.412 13:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.955 00:07:51.955 real 0m44.305s 00:07:51.955 user 1m7.432s 00:07:51.955 sys 0m10.167s 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.955 ************************************ 00:07:51.955 END TEST nvmf_lvs_grow 00:07:51.955 ************************************ 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.955 ************************************ 00:07:51.955 START TEST nvmf_bdev_io_wait 00:07:51.955 ************************************ 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:51.955 * Looking for test storage... 
00:07:51.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.955 13:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.955 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.956 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.956 13:32:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.956 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.957 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.957 13:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.096 13:32:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:00.096 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:00.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.096 13:32:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:00.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.096 
13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:00.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.096 13:32:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.096 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:00.097 00:08:00.097 --- 10.0.0.2 ping statistics --- 00:08:00.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.097 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:08:00.097 00:08:00.097 --- 10.0.0.1 ping statistics --- 00:08:00.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.097 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=466856 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 466856 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 466856 ']' 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.097 13:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 [2024-11-06 13:32:22.510472] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:00.097 [2024-11-06 13:32:22.510526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.097 [2024-11-06 13:32:22.588633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.097 [2024-11-06 13:32:22.627845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.097 [2024-11-06 13:32:22.627878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:00.097 [2024-11-06 13:32:22.627886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.097 [2024-11-06 13:32:22.627892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.097 [2024-11-06 13:32:22.627898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.097 [2024-11-06 13:32:22.629446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.097 [2024-11-06 13:32:22.629572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.097 [2024-11-06 13:32:22.629728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.097 [2024-11-06 13:32:22.629729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 13:32:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 [2024-11-06 13:32:23.422034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 Malloc0 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 
13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.097 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.359 [2024-11-06 13:32:23.481213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=467204 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=467206 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.359 { 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme$subsystem", 00:08:00.359 "trtype": "$TEST_TRANSPORT", 00:08:00.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "$NVMF_PORT", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.359 "hdgst": ${hdgst:-false}, 00:08:00.359 "ddgst": ${ddgst:-false} 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 } 00:08:00.359 EOF 00:08:00.359 )") 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=467208 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=467211 00:08:00.359 13:32:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.359 { 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme$subsystem", 00:08:00.359 "trtype": "$TEST_TRANSPORT", 00:08:00.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "$NVMF_PORT", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.359 "hdgst": ${hdgst:-false}, 00:08:00.359 "ddgst": ${ddgst:-false} 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 } 00:08:00.359 EOF 00:08:00.359 )") 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.359 { 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme$subsystem", 00:08:00.359 "trtype": "$TEST_TRANSPORT", 00:08:00.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "$NVMF_PORT", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.359 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:00.359 "hdgst": ${hdgst:-false}, 00:08:00.359 "ddgst": ${ddgst:-false} 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 } 00:08:00.359 EOF 00:08:00.359 )") 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.359 { 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme$subsystem", 00:08:00.359 "trtype": "$TEST_TRANSPORT", 00:08:00.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "$NVMF_PORT", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.359 "hdgst": ${hdgst:-false}, 00:08:00.359 "ddgst": ${ddgst:-false} 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 } 00:08:00.359 EOF 00:08:00.359 )") 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 467204 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme1", 00:08:00.359 "trtype": "tcp", 00:08:00.359 "traddr": "10.0.0.2", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "4420", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.359 "hdgst": false, 00:08:00.359 "ddgst": false 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 }' 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme1", 00:08:00.359 "trtype": "tcp", 00:08:00.359 "traddr": "10.0.0.2", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "4420", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.359 "hdgst": false, 00:08:00.359 "ddgst": false 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 }' 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme1", 00:08:00.359 "trtype": "tcp", 00:08:00.359 "traddr": "10.0.0.2", 00:08:00.359 "adrfam": "ipv4", 00:08:00.359 "trsvcid": "4420", 00:08:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.359 "hdgst": false, 00:08:00.359 "ddgst": false 00:08:00.359 }, 00:08:00.359 "method": "bdev_nvme_attach_controller" 00:08:00.359 }' 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.359 13:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.359 "params": { 00:08:00.359 "name": "Nvme1", 00:08:00.359 "trtype": "tcp", 00:08:00.360 "traddr": "10.0.0.2", 00:08:00.360 "adrfam": "ipv4", 00:08:00.360 "trsvcid": "4420", 00:08:00.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.360 "hdgst": false, 00:08:00.360 "ddgst": false 00:08:00.360 }, 00:08:00.360 "method": "bdev_nvme_attach_controller" 00:08:00.360 }' 00:08:00.360 [2024-11-06 13:32:23.537199] Starting SPDK v25.01-pre git sha1 
cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:00.360 [2024-11-06 13:32:23.537255] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:00.360 [2024-11-06 13:32:23.537779] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:00.360 [2024-11-06 13:32:23.537827] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:00.360 [2024-11-06 13:32:23.538318] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:00.360 [2024-11-06 13:32:23.538365] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:00.360 [2024-11-06 13:32:23.541751] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:08:00.360 [2024-11-06 13:32:23.541797] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:00.360 [2024-11-06 13:32:23.697749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.360 [2024-11-06 13:32:23.726392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.620 [2024-11-06 13:32:23.756864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.620 [2024-11-06 13:32:23.786084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:00.620 [2024-11-06 13:32:23.805361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.620 [2024-11-06 13:32:23.833908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:00.620 [2024-11-06 13:32:23.854424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.620 [2024-11-06 13:32:23.882202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:00.620 Running I/O for 1 seconds... 00:08:00.881 Running I/O for 1 seconds... 00:08:00.881 Running I/O for 1 seconds... 00:08:00.881 Running I/O for 1 seconds... 
00:08:01.822 18527.00 IOPS, 72.37 MiB/s 00:08:01.822 Latency(us) 00:08:01.822 [2024-11-06T12:32:25.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.822 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:01.822 Nvme1n1 : 1.01 18593.65 72.63 0.00 0.00 6864.45 3372.37 15182.51 00:08:01.822 [2024-11-06T12:32:25.198Z] =================================================================================================================== 00:08:01.822 [2024-11-06T12:32:25.198Z] Total : 18593.65 72.63 0.00 0.00 6864.45 3372.37 15182.51 00:08:01.822 188272.00 IOPS, 735.44 MiB/s 00:08:01.822 Latency(us) 00:08:01.822 [2024-11-06T12:32:25.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.822 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:01.822 Nvme1n1 : 1.00 187899.50 733.98 0.00 0.00 677.26 302.08 1966.08 00:08:01.822 [2024-11-06T12:32:25.198Z] =================================================================================================================== 00:08:01.822 [2024-11-06T12:32:25.198Z] Total : 187899.50 733.98 0.00 0.00 677.26 302.08 1966.08 00:08:01.822 13094.00 IOPS, 51.15 MiB/s [2024-11-06T12:32:25.198Z] 11456.00 IOPS, 44.75 MiB/s 00:08:01.822 Latency(us) 00:08:01.822 [2024-11-06T12:32:25.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.822 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:01.822 Nvme1n1 : 1.01 13166.31 51.43 0.00 0.00 9692.13 2102.61 14745.60 00:08:01.822 [2024-11-06T12:32:25.198Z] =================================================================================================================== 00:08:01.822 [2024-11-06T12:32:25.198Z] Total : 13166.31 51.43 0.00 0.00 9692.13 2102.61 14745.60 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 467206 00:08:01.822 00:08:01.822 Latency(us) 
00:08:01.822 [2024-11-06T12:32:25.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.822 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:01.822 Nvme1n1 : 1.01 11513.29 44.97 0.00 0.00 11079.32 4724.05 18786.99 00:08:01.822 [2024-11-06T12:32:25.198Z] =================================================================================================================== 00:08:01.822 [2024-11-06T12:32:25.198Z] Total : 11513.29 44.97 0.00 0.00 11079.32 4724.05 18786.99 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 467208 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 467211 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.822 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.083 rmmod nvme_tcp 00:08:02.083 rmmod nvme_fabrics 00:08:02.083 rmmod nvme_keyring 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 466856 ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 466856 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 466856 ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 466856 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 466856 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 466856' 00:08:02.083 killing process with pid 466856 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 466856 00:08:02.083 13:32:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 466856 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.083 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.352 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.352 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.352 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.352 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.352 13:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.386 00:08:04.386 real 0m12.695s 00:08:04.386 user 0m18.626s 00:08:04.386 sys 0m7.067s 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.386 ************************************ 
00:08:04.386 END TEST nvmf_bdev_io_wait 00:08:04.386 ************************************ 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.386 ************************************ 00:08:04.386 START TEST nvmf_queue_depth 00:08:04.386 ************************************ 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:04.386 * Looking for test storage... 00:08:04.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.386 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.699 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.700 --rc genhtml_branch_coverage=1 00:08:04.700 --rc genhtml_function_coverage=1 00:08:04.700 --rc genhtml_legend=1 00:08:04.700 --rc geninfo_all_blocks=1 00:08:04.700 --rc 
geninfo_unexecuted_blocks=1 00:08:04.700 00:08:04.700 ' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.700 --rc genhtml_branch_coverage=1 00:08:04.700 --rc genhtml_function_coverage=1 00:08:04.700 --rc genhtml_legend=1 00:08:04.700 --rc geninfo_all_blocks=1 00:08:04.700 --rc geninfo_unexecuted_blocks=1 00:08:04.700 00:08:04.700 ' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.700 --rc genhtml_branch_coverage=1 00:08:04.700 --rc genhtml_function_coverage=1 00:08:04.700 --rc genhtml_legend=1 00:08:04.700 --rc geninfo_all_blocks=1 00:08:04.700 --rc geninfo_unexecuted_blocks=1 00:08:04.700 00:08:04.700 ' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.700 --rc genhtml_branch_coverage=1 00:08:04.700 --rc genhtml_function_coverage=1 00:08:04.700 --rc genhtml_legend=1 00:08:04.700 --rc geninfo_all_blocks=1 00:08:04.700 --rc geninfo_unexecuted_blocks=1 00:08:04.700 00:08:04.700 ' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.700 13:32:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.700 13:32:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.700 13:32:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.700 13:32:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.966 13:32:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:12.966 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:12.966 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.966 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:12.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:12.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.967 
13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.967 13:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:08:12.967 00:08:12.967 --- 10.0.0.2 ping statistics --- 00:08:12.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.967 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:08:12.967 00:08:12.967 --- 10.0.0.1 ping statistics --- 00:08:12.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.967 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=471794 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 471794 
00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 471794 ']' 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.967 13:32:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 [2024-11-06 13:32:35.227350] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:12.967 [2024-11-06 13:32:35.227415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.967 [2024-11-06 13:32:35.328346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.967 [2024-11-06 13:32:35.379319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.967 [2024-11-06 13:32:35.379387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:12.967 [2024-11-06 13:32:35.379396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.967 [2024-11-06 13:32:35.379403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.967 [2024-11-06 13:32:35.379409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.967 [2024-11-06 13:32:35.380179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 [2024-11-06 13:32:36.093611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 Malloc0 00:08:12.967 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 [2024-11-06 13:32:36.154805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.968 13:32:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=471955 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 471955 /var/tmp/bdevperf.sock 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 471955 ']' 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.968 13:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 [2024-11-06 13:32:36.212967] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:08:12.968 [2024-11-06 13:32:36.213037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471955 ] 00:08:12.968 [2024-11-06 13:32:36.288955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.968 [2024-11-06 13:32:36.330903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.908 NVMe0n1 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.908 13:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.908 Running I/O for 10 seconds... 
00:08:16.236 8931.00 IOPS, 34.89 MiB/s [2024-11-06T12:32:40.553Z] 9457.50 IOPS, 36.94 MiB/s [2024-11-06T12:32:41.496Z] 10239.67 IOPS, 40.00 MiB/s [2024-11-06T12:32:42.437Z] 10503.25 IOPS, 41.03 MiB/s [2024-11-06T12:32:43.379Z] 10756.40 IOPS, 42.02 MiB/s [2024-11-06T12:32:44.321Z] 10919.17 IOPS, 42.65 MiB/s [2024-11-06T12:32:45.706Z] 10997.86 IOPS, 42.96 MiB/s [2024-11-06T12:32:46.648Z] 11120.00 IOPS, 43.44 MiB/s [2024-11-06T12:32:47.591Z] 11150.56 IOPS, 43.56 MiB/s [2024-11-06T12:32:47.591Z] 11231.30 IOPS, 43.87 MiB/s 00:08:24.215 Latency(us) 00:08:24.215 [2024-11-06T12:32:47.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.215 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:24.215 Verification LBA range: start 0x0 length 0x4000 00:08:24.215 NVMe0n1 : 10.06 11253.69 43.96 0.00 0.00 90607.52 19770.03 72526.51 00:08:24.215 [2024-11-06T12:32:47.591Z] =================================================================================================================== 00:08:24.215 [2024-11-06T12:32:47.591Z] Total : 11253.69 43.96 0.00 0.00 90607.52 19770.03 72526.51 00:08:24.215 { 00:08:24.215 "results": [ 00:08:24.215 { 00:08:24.215 "job": "NVMe0n1", 00:08:24.215 "core_mask": "0x1", 00:08:24.215 "workload": "verify", 00:08:24.215 "status": "finished", 00:08:24.215 "verify_range": { 00:08:24.215 "start": 0, 00:08:24.215 "length": 16384 00:08:24.215 }, 00:08:24.215 "queue_depth": 1024, 00:08:24.215 "io_size": 4096, 00:08:24.215 "runtime": 10.059808, 00:08:24.215 "iops": 11253.693907478155, 00:08:24.215 "mibps": 43.95974182608654, 00:08:24.215 "io_failed": 0, 00:08:24.215 "io_timeout": 0, 00:08:24.215 "avg_latency_us": 90607.5157798781, 00:08:24.215 "min_latency_us": 19770.02666666667, 00:08:24.215 "max_latency_us": 72526.50666666667 00:08:24.215 } 00:08:24.215 ], 00:08:24.215 "core_count": 1 00:08:24.215 } 00:08:24.215 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 471955 00:08:24.215 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 471955 ']' 00:08:24.215 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 471955 00:08:24.215 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:24.215 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 471955 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 471955' 00:08:24.216 killing process with pid 471955 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 471955 00:08:24.216 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.216 00:08:24.216 Latency(us) 00:08:24.216 [2024-11-06T12:32:47.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.216 [2024-11-06T12:32:47.592Z] =================================================================================================================== 00:08:24.216 [2024-11-06T12:32:47.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 471955 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.216 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.216 rmmod nvme_tcp 00:08:24.216 rmmod nvme_fabrics 00:08:24.476 rmmod nvme_keyring 00:08:24.476 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.476 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:24.476 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:24.476 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 471794 ']' 00:08:24.476 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 471794 ']' 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 471794' 00:08:24.477 killing process with pid 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 471794 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.477 13:32:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.027 13:32:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.027 00:08:27.027 real 0m22.261s 00:08:27.027 user 0m25.750s 00:08:27.027 sys 0m6.808s 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.027 ************************************ 00:08:27.027 END TEST nvmf_queue_depth 00:08:27.027 ************************************ 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.027 ************************************ 00:08:27.027 START TEST nvmf_target_multipath 00:08:27.027 ************************************ 00:08:27.027 13:32:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.027 * Looking for test storage... 
00:08:27.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:27.027 13:32:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:27.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.027 --rc genhtml_branch_coverage=1 00:08:27.027 --rc genhtml_function_coverage=1 00:08:27.027 --rc genhtml_legend=1 00:08:27.027 --rc geninfo_all_blocks=1 00:08:27.027 --rc geninfo_unexecuted_blocks=1 00:08:27.027 00:08:27.027 ' 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:27.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.027 --rc genhtml_branch_coverage=1 00:08:27.027 --rc genhtml_function_coverage=1 00:08:27.027 --rc genhtml_legend=1 00:08:27.027 --rc geninfo_all_blocks=1 00:08:27.027 --rc geninfo_unexecuted_blocks=1 00:08:27.027 00:08:27.027 ' 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:27.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.027 --rc genhtml_branch_coverage=1 00:08:27.027 --rc genhtml_function_coverage=1 00:08:27.027 --rc genhtml_legend=1 00:08:27.027 --rc geninfo_all_blocks=1 00:08:27.027 --rc geninfo_unexecuted_blocks=1 00:08:27.027 00:08:27.027 ' 00:08:27.027 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:27.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.027 --rc genhtml_branch_coverage=1 00:08:27.027 --rc genhtml_function_coverage=1 00:08:27.027 --rc genhtml_legend=1 00:08:27.027 --rc geninfo_all_blocks=1 00:08:27.028 --rc geninfo_unexecuted_blocks=1 00:08:27.028 00:08:27.028 ' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.028 13:32:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:35.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:35.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:35.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.170 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.171 13:32:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:35.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:35.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:35.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms
00:08:35.171
00:08:35.171 --- 10.0.0.2 ping statistics ---
00:08:35.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:35.171 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:35.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:35.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:08:35.171
00:08:35.171 --- 10.0.0.1 ping statistics ---
00:08:35.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:35.171 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:35.171 13:32:57
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.171 rmmod nvme_tcp 00:08:35.171 rmmod nvme_fabrics 00:08:35.171 rmmod nvme_keyring 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.171 13:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.557 00:08:36.557 real 0m9.892s 00:08:36.557 user 0m2.150s 00:08:36.557 sys 0m5.676s 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:36.557 ************************************ 00:08:36.557 END TEST nvmf_target_multipath 00:08:36.557 ************************************ 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.557 13:32:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.818 ************************************ 00:08:36.818 START TEST nvmf_zcopy 00:08:36.818 ************************************ 00:08:36.818 13:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:36.818 * Looking for test storage... 00:08:36.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:36.818 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.819 13:33:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.819 --rc genhtml_branch_coverage=1 00:08:36.819 --rc genhtml_function_coverage=1 00:08:36.819 --rc genhtml_legend=1 00:08:36.819 --rc geninfo_all_blocks=1 00:08:36.819 --rc geninfo_unexecuted_blocks=1 00:08:36.819 00:08:36.819 ' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.819 --rc genhtml_branch_coverage=1 00:08:36.819 --rc genhtml_function_coverage=1 00:08:36.819 --rc genhtml_legend=1 00:08:36.819 --rc geninfo_all_blocks=1 00:08:36.819 --rc geninfo_unexecuted_blocks=1 00:08:36.819 00:08:36.819 ' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:36.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.819 --rc genhtml_branch_coverage=1 00:08:36.819 --rc genhtml_function_coverage=1 00:08:36.819 --rc genhtml_legend=1 00:08:36.819 --rc geninfo_all_blocks=1 00:08:36.819 --rc geninfo_unexecuted_blocks=1 00:08:36.819 00:08:36.819 ' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.819 --rc genhtml_branch_coverage=1 00:08:36.819 --rc 
genhtml_function_coverage=1 00:08:36.819 --rc genhtml_legend=1 00:08:36.819 --rc geninfo_all_blocks=1 00:08:36.819 --rc geninfo_unexecuted_blocks=1 00:08:36.819 00:08:36.819 ' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.819 13:33:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:36.819 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.819 13:33:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:36.820 13:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.968 13:33:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:44.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:44.968 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.968 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:44.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:44.969 13:33:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:44.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.969 13:33:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:08:44.969 00:08:44.969 --- 10.0.0.2 ping statistics --- 00:08:44.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.969 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:08:44.969 00:08:44.969 --- 10.0.0.1 ping statistics --- 00:08:44.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.969 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=482756 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 482756 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 482756 ']' 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.969 13:33:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.969 [2024-11-06 13:33:07.527136] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:44.969 [2024-11-06 13:33:07.527206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.969 [2024-11-06 13:33:07.625200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.969 [2024-11-06 13:33:07.675247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.969 [2024-11-06 13:33:07.675298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:44.969 [2024-11-06 13:33:07.675307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.969 [2024-11-06 13:33:07.675314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.969 [2024-11-06 13:33:07.675320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.969 [2024-11-06 13:33:07.676088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.969 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.969 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:44.969 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.969 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.969 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 [2024-11-06 13:33:08.390399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 [2024-11-06 13:33:08.406640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 malloc0 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.232 { 00:08:45.232 "params": { 00:08:45.232 "name": "Nvme$subsystem", 00:08:45.232 "trtype": "$TEST_TRANSPORT", 00:08:45.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.232 "adrfam": "ipv4", 00:08:45.232 "trsvcid": "$NVMF_PORT", 00:08:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.232 "hdgst": ${hdgst:-false}, 00:08:45.232 "ddgst": ${ddgst:-false} 00:08:45.232 }, 00:08:45.232 "method": "bdev_nvme_attach_controller" 00:08:45.232 } 00:08:45.232 EOF 00:08:45.232 )") 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:45.232 13:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.232 "params": { 00:08:45.232 "name": "Nvme1", 00:08:45.232 "trtype": "tcp", 00:08:45.232 "traddr": "10.0.0.2", 00:08:45.232 "adrfam": "ipv4", 00:08:45.232 "trsvcid": "4420", 00:08:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.232 "hdgst": false, 00:08:45.232 "ddgst": false 00:08:45.232 }, 00:08:45.232 "method": "bdev_nvme_attach_controller" 00:08:45.232 }' 00:08:45.232 [2024-11-06 13:33:08.496317] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:08:45.232 [2024-11-06 13:33:08.496388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483099 ] 00:08:45.232 [2024-11-06 13:33:08.574638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.493 [2024-11-06 13:33:08.617444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.754 Running I/O for 10 seconds... 
00:08:47.638 6649.00 IOPS, 51.95 MiB/s [2024-11-06T12:33:11.957Z] 6708.00 IOPS, 52.41 MiB/s [2024-11-06T12:33:13.343Z] 7457.67 IOPS, 58.26 MiB/s [2024-11-06T12:33:13.915Z] 8032.50 IOPS, 62.75 MiB/s [2024-11-06T12:33:15.300Z] 8377.20 IOPS, 65.45 MiB/s [2024-11-06T12:33:16.241Z] 8607.00 IOPS, 67.24 MiB/s [2024-11-06T12:33:17.183Z] 8774.57 IOPS, 68.55 MiB/s [2024-11-06T12:33:18.124Z] 8896.75 IOPS, 69.51 MiB/s [2024-11-06T12:33:19.066Z] 8993.22 IOPS, 70.26 MiB/s [2024-11-06T12:33:19.066Z] 9070.50 IOPS, 70.86 MiB/s 00:08:55.690 Latency(us) 00:08:55.690 [2024-11-06T12:33:19.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:55.690 Verification LBA range: start 0x0 length 0x1000 00:08:55.690 Nvme1n1 : 10.01 9074.66 70.90 0.00 0.00 14053.15 1843.20 26760.53 00:08:55.690 [2024-11-06T12:33:19.066Z] =================================================================================================================== 00:08:55.690 [2024-11-06T12:33:19.067Z] Total : 9074.66 70.90 0.00 0.00 14053.15 1843.20 26760.53 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=485578 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.691 [2024-11-06 
13:33:19.040785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.691 [2024-11-06 13:33:19.040813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.691 { 00:08:55.691 "params": { 00:08:55.691 "name": "Nvme$subsystem", 00:08:55.691 "trtype": "$TEST_TRANSPORT", 00:08:55.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.691 "adrfam": "ipv4", 00:08:55.691 "trsvcid": "$NVMF_PORT", 00:08:55.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.691 "hdgst": ${hdgst:-false}, 00:08:55.691 "ddgst": ${ddgst:-false} 00:08:55.691 }, 00:08:55.691 "method": "bdev_nvme_attach_controller" 00:08:55.691 } 00:08:55.691 EOF 00:08:55.691 )") 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:55.691 [2024-11-06 13:33:19.048770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.691 [2024-11-06 13:33:19.048780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:55.691 13:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.691 "params": { 00:08:55.691 "name": "Nvme1", 00:08:55.691 "trtype": "tcp", 00:08:55.691 "traddr": "10.0.0.2", 00:08:55.691 "adrfam": "ipv4", 00:08:55.691 "trsvcid": "4420", 00:08:55.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.691 "hdgst": false, 00:08:55.691 "ddgst": false 00:08:55.691 }, 00:08:55.691 "method": "bdev_nvme_attach_controller" 00:08:55.691 }' 00:08:55.691 [2024-11-06 13:33:19.056787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.691 [2024-11-06 13:33:19.056795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.691 [2024-11-06 13:33:19.064808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.691 [2024-11-06 13:33:19.064816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.072828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.072836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.084861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.084868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.089423] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:08:55.952 [2024-11-06 13:33:19.089471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485578 ] 00:08:55.952 [2024-11-06 13:33:19.092880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.092889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.100900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.100908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.108921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.108928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.116942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.116949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.124962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.124970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.132983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.132991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.141004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.141012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:55.952 [2024-11-06 13:33:19.149025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.149034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.157045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.157052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.159475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.952 [2024-11-06 13:33:19.165065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.165074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.173086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.173094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.181107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.181115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.189127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.189137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.194802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.952 [2024-11-06 13:33:19.197148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.197157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.205170] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.205179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.213193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.213203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.221210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.221220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.229231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.229246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.237252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.237262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.245272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.245282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.253291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.253298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.261310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.261317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.269331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.269338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.277363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.277379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.285378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.285387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.293397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.293405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.301418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.301429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.309440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.309449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.317461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.317471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.952 [2024-11-06 13:33:19.325481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.952 [2024-11-06 13:33:19.325488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.333510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 
[2024-11-06 13:33:19.333525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.341522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.341530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 Running I/O for 5 seconds... 00:08:56.213 [2024-11-06 13:33:19.349542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.349550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.360824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.360841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.368930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.368946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.377560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.377576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.386062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.386078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.395043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.395059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.403533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 
13:33:19.403549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.412409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.412425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.421524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.421539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.430581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.430597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.439791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.439807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.448236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.448251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.457036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.457051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.466152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.466168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.474830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.474855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.484190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.213 [2024-11-06 13:33:19.484205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.213 [2024-11-06 13:33:19.492833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.492848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.501592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.501608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.510321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.510337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.519197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.519212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.528244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.528259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.537091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.537106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.545584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.545599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 
[2024-11-06 13:33:19.554430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.554446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.563258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.563273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.572272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.572287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.214 [2024-11-06 13:33:19.581243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.214 [2024-11-06 13:33:19.581259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.590329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.590345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.598965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.598980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.607883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.607899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.616445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.616460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.625742] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.625763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.634783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.634799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.643594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.643609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.652632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.652648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.661204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.661219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.670132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.670147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.678533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.678548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.687384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.687400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.695614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.695629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.704310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.704326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.713509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.713527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.722289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.722304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.731118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.731133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.476 [2024-11-06 13:33:19.740007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.476 [2024-11-06 13:33:19.740023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.749151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.749166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.757830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.757845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.766259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 
[2024-11-06 13:33:19.766274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.775056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.775071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.783396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.783411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.792160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.792176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.800945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.800960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.809927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.809942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.818471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.818486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.826524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.826540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.835500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.835516] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.477 [2024-11-06 13:33:19.844290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.477 [2024-11-06 13:33:19.844306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.853087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.853103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.861649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.861664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.870000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.870014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.878674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.878694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.887629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.887644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.896653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.896668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.905320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.905335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:56.738 [2024-11-06 13:33:19.914228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.914244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.922715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.922731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.931492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.931507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.940467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.940482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.949520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.949535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.958213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.958228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.966900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.966916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.975762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.975777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.984415] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.984430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:19.993558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:19.993573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.001467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.001482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.010714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.010730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.019832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.019848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.028502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.028517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.037489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.037505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.045853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.045880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.055305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.055320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.063733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.063753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.072815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.072831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.081437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.081453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.094241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.094256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.738 [2024-11-06 13:33:20.107915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.738 [2024-11-06 13:33:20.107931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.121635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.121651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.133940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.133956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.146741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 
[2024-11-06 13:33:20.146764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.160281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.160297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.172728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.172743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.186260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.186275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.199824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.199840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.212831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.212846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.225215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.225230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.237985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.238001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.251457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.999 [2024-11-06 13:33:20.251472] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.999 [2024-11-06 13:33:20.264810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.264826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.278045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.278065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.291511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.291526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.304316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.304332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.316868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.316882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.324681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.324696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.333484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.333499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.342863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.342878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.000 19021.00 IOPS, 148.60 MiB/s [2024-11-06T12:33:20.376Z] [2024-11-06 13:33:20.351464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.351480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.360816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.360831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.000 [2024-11-06 13:33:20.369896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.000 [2024-11-06 13:33:20.369911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.378964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.378979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.387411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.387426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.396457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.396472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.405392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.405407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.413957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.413971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.261 [2024-11-06 13:33:20.422428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.422443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.431767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.431782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.440393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.440408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.449284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.449298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.458651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.458666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.467187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.467202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.476501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.476517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.485009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.485024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.493525] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.493539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.502437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.502451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.511037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.511053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.519884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.519899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.529021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.529037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.538086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.538102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.546637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.546652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.555563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.555578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.564545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.564560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.573637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.573651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.581996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.582011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.590408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.590422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.599665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.599680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.608613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.608628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.617311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.617325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.626394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 [2024-11-06 13:33:20.626409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.261 [2024-11-06 13:33:20.634837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.261 
[2024-11-06 13:33:20.634852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.643531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.643546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.651545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.651559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.660579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.660596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.669744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.669763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.678901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.678916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.688075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.688090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.697314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.697329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.705353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.705368] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.714114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.714129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.722850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.722865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.731834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.731849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.740356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.740371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.749548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.749563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.758208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.758223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.767331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.767346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.522 [2024-11-06 13:33:20.775999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.776014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.522 [2024-11-06 13:33:20.784603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.522 [2024-11-06 13:33:20.784618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
19112.50 IOPS, 149.32 MiB/s [2024-11-06T12:33:21.421Z]
add namespace 00:08:59.094 [2024-11-06 13:33:22.293225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.293240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.301774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.301789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.310686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.310701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.319336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.319351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.328318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.328332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.337017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.337031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 [2024-11-06 13:33:22.346111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.346125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.094 19158.67 IOPS, 149.68 MiB/s [2024-11-06T12:33:22.470Z] [2024-11-06 13:33:22.355524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.094 [2024-11-06 13:33:22.355539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
19180.50 IOPS, 149.85 MiB/s [2024-11-06T12:33:23.515Z]
add namespace 00:09:00.400 [2024-11-06 13:33:23.519173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.519188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.527809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.527825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.536512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.536526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.545176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.545191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.554276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.554291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.563249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.563264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.571900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.571916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.580705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.580720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.589894] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.589909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.597717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.597732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.607049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.607068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.615557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.615573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.624179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.624194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.633570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.633585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.642665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.642680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.651323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.651338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.660313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.660329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.669073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.669088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.677245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.677261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.686182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.686197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.695240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.695256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.703713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.703728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.712711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.712726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.721572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.721587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.730290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 
[2024-11-06 13:33:23.730305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.739493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.739509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.748361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.748376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.757311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.757325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.400 [2024-11-06 13:33:23.766249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.400 [2024-11-06 13:33:23.766264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.775089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.775105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.783467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.783482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.792192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.792206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.801512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.801527] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.809910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.809925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.819023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.819039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.827868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.827882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.836330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.836345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.844862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.844877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.854032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.854046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.861982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.661 [2024-11-06 13:33:23.861997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.661 [2024-11-06 13:33:23.870902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.870917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.662 [2024-11-06 13:33:23.879959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.879973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.888643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.888658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.897526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.897541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.906111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.906127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.915200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.915214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.924324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.924339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.932684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.932699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.941553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.941567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.950475] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.950490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.959693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.959707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.968718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.968733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.976670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.976686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.985319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.985333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:23.993391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:23.993406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:24.002665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:24.002680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:24.011750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:24.011764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:24.020927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:24.020942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-06 13:33:24.029885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-06 13:33:24.029900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.038973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.038988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.048151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.048166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.056761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.056775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.065500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.065516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.074156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.074171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.082517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.082531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.091376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 
[2024-11-06 13:33:24.091390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.100064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.100078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.109033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.109047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.118344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.923 [2024-11-06 13:33:24.118359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.923 [2024-11-06 13:33:24.127279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.127293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.136592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.136607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.145186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.145201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.153893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.153909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.163053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.163068] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.172188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.172203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.180766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.180781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.189544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.189558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.198359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.198374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.207487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.207502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.215926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.215940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.225321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.225337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.233925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.233940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.924 [2024-11-06 13:33:24.243174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.243189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.251225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.251239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.260338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.260353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.268777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.268796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.277454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.277468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.286370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.286385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.924 [2024-11-06 13:33:24.295323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.924 [2024-11-06 13:33:24.295338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.184 [2024-11-06 13:33:24.303752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.184 [2024-11-06 13:33:24.303768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.184 [2024-11-06 13:33:24.312577] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.184 [2024-11-06 13:33:24.312592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.184 [2024-11-06 13:33:24.321229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.184 [2024-11-06 13:33:24.321245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.330263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.330278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.339467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.339482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.348157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.348172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.356988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.357003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 19191.20 IOPS, 149.93 MiB/s [2024-11-06T12:33:24.561Z] [2024-11-06 13:33:24.363024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.363038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 00:09:01.185 Latency(us) 00:09:01.185 [2024-11-06T12:33:24.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.185 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:01.185 Nvme1n1 : 5.01 
19192.19 149.94 0.00 0.00 6662.89 2662.40 15510.19 00:09:01.185 [2024-11-06T12:33:24.561Z] =================================================================================================================== 00:09:01.185 [2024-11-06T12:33:24.561Z] Total : 19192.19 149.94 0.00 0.00 6662.89 2662.40 15510.19 00:09:01.185 [2024-11-06 13:33:24.371035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.371046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.379054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.379065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.387077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.387088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.395099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.395110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.403119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.403133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.411137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.411146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.419156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.419164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:01.185 [2024-11-06 13:33:24.427175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.427183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.435196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.435204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.443217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.443224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.451238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.451247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.459261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.459270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.467278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.467286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 [2024-11-06 13:33:24.475299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.185 [2024-11-06 13:33:24.475307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (485578) - No such process 00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 485578 00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.185 delay0
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:01.185 13:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:01.446 [2024-11-06 13:33:24.662966] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:09.582 Initializing NVMe Controllers
00:09:09.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:09.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:09.582 Initialization complete. Launching workers.
00:09:09.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 227, failed: 37986
00:09:09.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 38067, failed to submit 146
00:09:09.582 success 38002, unsuccessful 65, failed 0
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 482756 ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 482756 ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 482756'
killing process with pid 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 482756
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.582 13:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.969 00:09:10.969 real 0m34.098s 00:09:10.969 user 0m45.651s 00:09:10.969 sys 0m11.571s 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.969 ************************************ 00:09:10.969 END TEST nvmf_zcopy 00:09:10.969 ************************************ 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.969 ************************************ 00:09:10.969 START TEST nvmf_nmic 00:09:10.969 ************************************ 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:10.969 * Looking for test storage... 
00:09:10.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.969 13:33:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.969 --rc genhtml_branch_coverage=1 00:09:10.969 --rc genhtml_function_coverage=1 00:09:10.969 --rc genhtml_legend=1 00:09:10.969 --rc geninfo_all_blocks=1 00:09:10.969 --rc geninfo_unexecuted_blocks=1 
00:09:10.969 00:09:10.969 ' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.969 --rc genhtml_branch_coverage=1 00:09:10.969 --rc genhtml_function_coverage=1 00:09:10.969 --rc genhtml_legend=1 00:09:10.969 --rc geninfo_all_blocks=1 00:09:10.969 --rc geninfo_unexecuted_blocks=1 00:09:10.969 00:09:10.969 ' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.969 --rc genhtml_branch_coverage=1 00:09:10.969 --rc genhtml_function_coverage=1 00:09:10.969 --rc genhtml_legend=1 00:09:10.969 --rc geninfo_all_blocks=1 00:09:10.969 --rc geninfo_unexecuted_blocks=1 00:09:10.969 00:09:10.969 ' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.969 --rc genhtml_branch_coverage=1 00:09:10.969 --rc genhtml_function_coverage=1 00:09:10.969 --rc genhtml_legend=1 00:09:10.969 --rc geninfo_all_blocks=1 00:09:10.969 --rc geninfo_unexecuted_blocks=1 00:09:10.969 00:09:10.969 ' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.969 13:33:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.969 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.970 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.231 
13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.231 13:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:19.410 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.411 13:33:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:19.411 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:19.411 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:19.411 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:19.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:19.411 
13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:19.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:09:19.411 00:09:19.411 --- 10.0.0.2 ping statistics --- 00:09:19.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.411 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:09:19.411 00:09:19.411 --- 10.0.0.1 ping statistics --- 00:09:19.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.411 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.411 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=492272 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 492272 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 492272 ']' 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:19.412 13:33:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.412 [2024-11-06 13:33:41.938591] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:09:19.412 [2024-11-06 13:33:41.938643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.412 [2024-11-06 13:33:42.017321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.412 [2024-11-06 13:33:42.053923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.412 [2024-11-06 13:33:42.053958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:19.412 [2024-11-06 13:33:42.053966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.412 [2024-11-06 13:33:42.053973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.412 [2024-11-06 13:33:42.053978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.412 [2024-11-06 13:33:42.055477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.412 [2024-11-06 13:33:42.055589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.412 [2024-11-06 13:33:42.055744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.412 [2024-11-06 13:33:42.055744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.412 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.412 [2024-11-06 13:33:42.781412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.674 
13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 Malloc0 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 [2024-11-06 13:33:42.851075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:19.674 test case1: single bdev can't be used in multiple subsystems 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.674 [2024-11-06 13:33:42.886958] bdev.c:8194:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:19.674 [2024-11-06 
13:33:42.886978] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:19.674 [2024-11-06 13:33:42.886986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.674 request: 00:09:19.674 { 00:09:19.674 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:19.674 "namespace": { 00:09:19.674 "bdev_name": "Malloc0", 00:09:19.674 "no_auto_visible": false 00:09:19.674 }, 00:09:19.674 "method": "nvmf_subsystem_add_ns", 00:09:19.674 "req_id": 1 00:09:19.674 } 00:09:19.674 Got JSON-RPC error response 00:09:19.674 response: 00:09:19.674 { 00:09:19.674 "code": -32602, 00:09:19.674 "message": "Invalid parameters" 00:09:19.674 } 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:19.674 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:19.674 Adding namespace failed - expected result. 
00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:19.675 test case2: host connect to nvmf target in multiple paths 00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.675 [2024-11-06 13:33:42.899115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.675 13:33:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.058 13:33:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:22.970 13:33:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.970 13:33:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:22.970 13:33:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.970 13:33:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:22.970 13:33:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:24.882 13:33:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:24.882 [global] 00:09:24.882 thread=1 00:09:24.882 invalidate=1 00:09:24.882 rw=write 00:09:24.882 time_based=1 00:09:24.882 runtime=1 00:09:24.882 ioengine=libaio 00:09:24.882 direct=1 00:09:24.882 bs=4096 00:09:24.882 iodepth=1 00:09:24.882 norandommap=0 00:09:24.882 numjobs=1 00:09:24.882 00:09:24.882 verify_dump=1 00:09:24.882 verify_backlog=512 00:09:24.882 verify_state_save=0 00:09:24.882 do_verify=1 00:09:24.882 verify=crc32c-intel 00:09:24.882 [job0] 00:09:24.882 filename=/dev/nvme0n1 00:09:24.882 Could not set queue depth (nvme0n1) 00:09:25.143 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.143 fio-3.35 00:09:25.143 Starting 1 thread 00:09:26.087 00:09:26.087 job0: (groupid=0, jobs=1): err= 0: pid=493818: Wed Nov 6 13:33:49 2024 00:09:26.087 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:26.087 slat (nsec): min=26026, max=61059, avg=27560.83, stdev=3866.56 00:09:26.087 clat (usec): min=651, max=1226, avg=1013.21, stdev=68.24 00:09:26.087 lat (usec): min=678, max=1253, 
avg=1040.77, stdev=68.24 00:09:26.087 clat percentiles (usec): 00:09:26.087 | 1.00th=[ 832], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 971], 00:09:26.087 | 30.00th=[ 996], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1029], 00:09:26.087 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:26.087 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:26.087 | 99.99th=[ 1221] 00:09:26.087 write: IOPS=714, BW=2857KiB/s (2926kB/s)(2860KiB/1001msec); 0 zone resets 00:09:26.087 slat (nsec): min=9950, max=71799, avg=30353.26, stdev=11043.21 00:09:26.087 clat (usec): min=288, max=823, avg=609.78, stdev=99.52 00:09:26.087 lat (usec): min=300, max=858, avg=640.13, stdev=104.64 00:09:26.087 clat percentiles (usec): 00:09:26.087 | 1.00th=[ 367], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 519], 00:09:26.087 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 660], 00:09:26.087 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 742], 00:09:26.087 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 824], 00:09:26.087 | 99.99th=[ 824] 00:09:26.087 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.087 lat (usec) : 500=9.05%, 750=47.43%, 1000=15.89% 00:09:26.087 lat (msec) : 2=27.63% 00:09:26.087 cpu : usr=1.10%, sys=4.40%, ctx=1229, majf=0, minf=1 00:09:26.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.087 issued rwts: total=512,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.087 00:09:26.087 Run status group 0 (all jobs): 00:09:26.087 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB 
(2097kB), run=1001-1001msec 00:09:26.087 WRITE: bw=2857KiB/s (2926kB/s), 2857KiB/s-2857KiB/s (2926kB/s-2926kB/s), io=2860KiB (2929kB), run=1001-1001msec 00:09:26.087 00:09:26.087 Disk stats (read/write): 00:09:26.087 nvme0n1: ios=554/552, merge=0/0, ticks=839/337, in_queue=1176, util=97.19% 00:09:26.087 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.347 rmmod nvme_tcp 00:09:26.347 rmmod nvme_fabrics 00:09:26.347 rmmod nvme_keyring 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 492272 ']' 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 492272 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 492272 ']' 00:09:26.347 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 492272 00:09:26.348 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:26.348 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.348 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 492272 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 492272' 00:09:26.607 killing process with pid 492272 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 492272 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 492272 00:09:26.607 13:33:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:26.607 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.608 13:33:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.153 13:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.153 00:09:29.153 real 0m17.881s 00:09:29.153 user 0m48.201s 00:09:29.153 sys 0m6.666s 00:09:29.153 13:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.153 13:33:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.153 ************************************ 00:09:29.154 END TEST nvmf_nmic 00:09:29.154 ************************************ 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.154 ************************************ 00:09:29.154 START TEST nvmf_fio_target 00:09:29.154 ************************************ 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:29.154 * Looking for test storage... 00:09:29.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:29.154 13:33:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:29.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.154 --rc genhtml_branch_coverage=1 00:09:29.154 --rc genhtml_function_coverage=1 00:09:29.154 --rc genhtml_legend=1 00:09:29.154 --rc geninfo_all_blocks=1 00:09:29.154 --rc geninfo_unexecuted_blocks=1 00:09:29.154 00:09:29.154 ' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:29.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.154 --rc genhtml_branch_coverage=1 00:09:29.154 --rc genhtml_function_coverage=1 00:09:29.154 --rc genhtml_legend=1 00:09:29.154 --rc geninfo_all_blocks=1 00:09:29.154 --rc geninfo_unexecuted_blocks=1 00:09:29.154 00:09:29.154 ' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:29.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.154 --rc genhtml_branch_coverage=1 00:09:29.154 --rc genhtml_function_coverage=1 00:09:29.154 --rc genhtml_legend=1 00:09:29.154 --rc geninfo_all_blocks=1 00:09:29.154 --rc geninfo_unexecuted_blocks=1 00:09:29.154 00:09:29.154 ' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:29.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.154 --rc genhtml_branch_coverage=1 00:09:29.154 --rc genhtml_function_coverage=1 00:09:29.154 --rc genhtml_legend=1 00:09:29.154 --rc geninfo_all_blocks=1 00:09:29.154 --rc geninfo_unexecuted_blocks=1 00:09:29.154 00:09:29.154 ' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.154 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.155 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.301 13:33:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:37.301 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:37.301 13:33:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:37.301 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:37.301 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:37.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:37.302 Found net devices under 0000:4b:00.1: cvl_0_1 
00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:09:37.302 00:09:37.302 --- 10.0.0.2 ping statistics --- 00:09:37.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.302 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:09:37.302 00:09:37.302 --- 10.0.0.1 ping statistics --- 00:09:37.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.302 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=498340 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 498340 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 498340 ']' 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.302 13:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.302 [2024-11-06 13:33:59.904461] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:09:37.302 [2024-11-06 13:33:59.904529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.302 [2024-11-06 13:33:59.989106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.302 [2024-11-06 13:34:00.041773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.302 [2024-11-06 13:34:00.041817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.302 [2024-11-06 13:34:00.041825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.302 [2024-11-06 13:34:00.041832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.302 [2024-11-06 13:34:00.041838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:37.302 [2024-11-06 13:34:00.043488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.302 [2024-11-06 13:34:00.043608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.302 [2024-11-06 13:34:00.043782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.302 [2024-11-06 13:34:00.043782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.564 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:37.564 [2024-11-06 13:34:00.904396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.825 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.825 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:37.825 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.086 13:34:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:38.086 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.347 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:38.347 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.607 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:38.607 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.607 13:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.866 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.866 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.127 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:39.127 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.127 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:39.127 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:39.387 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.648 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.648 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.909 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.909 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.909 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.169 [2024-11-06 13:34:03.373997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.169 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:40.430 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:40.430 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:42.342 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:44.278 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:44.278 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:44.278 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.279 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:44.279 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.279 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:44.279 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:44.279 [global] 00:09:44.279 thread=1 00:09:44.279 invalidate=1 00:09:44.279 rw=write 00:09:44.279 time_based=1 00:09:44.279 runtime=1 00:09:44.279 ioengine=libaio 00:09:44.279 direct=1 00:09:44.279 bs=4096 00:09:44.279 iodepth=1 00:09:44.279 norandommap=0 00:09:44.279 numjobs=1 00:09:44.279 00:09:44.279 
verify_dump=1
00:09:44.279 verify_backlog=512
00:09:44.279 verify_state_save=0
00:09:44.279 do_verify=1
00:09:44.279 verify=crc32c-intel
00:09:44.279 [job0]
00:09:44.279 filename=/dev/nvme0n1
00:09:44.279 [job1]
00:09:44.279 filename=/dev/nvme0n2
00:09:44.279 [job2]
00:09:44.279 filename=/dev/nvme0n3
00:09:44.279 [job3]
00:09:44.279 filename=/dev/nvme0n4
00:09:44.279 Could not set queue depth (nvme0n1)
00:09:44.279 Could not set queue depth (nvme0n2)
00:09:44.279 Could not set queue depth (nvme0n3)
00:09:44.279 Could not set queue depth (nvme0n4)
00:09:44.543 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:44.543 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:44.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:44.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:44.543 fio-3.35
00:09:44.543 Starting 4 threads
00:09:45.951
00:09:45.951 job0: (groupid=0, jobs=1): err= 0: pid=500099: Wed Nov 6 13:34:08 2024
00:09:45.951 read: IOPS=351, BW=1407KiB/s (1440kB/s)(1408KiB/1001msec)
00:09:45.951 slat (nsec): min=25640, max=63148, avg=26718.83, stdev=2794.02
00:09:45.951 clat (usec): min=706, max=41996, avg=1839.42, stdev=5706.15
00:09:45.951 lat (usec): min=733, max=42022, avg=1866.14, stdev=5706.10
00:09:45.951 clat percentiles (usec):
00:09:45.951 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 963],
00:09:45.951 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057],
00:09:45.951 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188],
00:09:45.951 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:45.951 | 99.99th=[42206]
00:09:45.951 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:09:45.951 slat (nsec): min=10232, max=62570, avg=31591.34, stdev=10587.62
00:09:45.951 clat (usec): min=206, max=1322, avg=625.73, stdev=129.39
00:09:45.951 lat (usec): min=219, max=1357, avg=657.32, stdev=133.41
00:09:45.951 clat percentiles (usec):
00:09:45.951 | 1.00th=[ 310], 5.00th=[ 400], 10.00th=[ 449], 20.00th=[ 515],
00:09:45.951 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 676],
00:09:45.951 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 816],
00:09:45.951 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1319], 99.95th=[ 1319],
00:09:45.951 | 99.99th=[ 1319]
00:09:45.951 bw ( KiB/s): min= 4096, max= 4096, per=47.72%, avg=4096.00, stdev= 0.00, samples=1
00:09:45.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:45.951 lat (usec) : 250=0.12%, 500=10.07%, 750=40.62%, 1000=22.45%
00:09:45.951 lat (msec) : 2=25.93%, 50=0.81%
00:09:45.951 cpu : usr=1.10%, sys=2.80%, ctx=866, majf=0, minf=1
00:09:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.951 issued rwts: total=352,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:45.951 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:45.951 job1: (groupid=0, jobs=1): err= 0: pid=500100: Wed Nov 6 13:34:08 2024
00:09:45.951 read: IOPS=31, BW=125KiB/s (128kB/s)(128KiB/1025msec)
00:09:45.951 slat (nsec): min=24873, max=28906, avg=26022.34, stdev=716.76
00:09:45.951 clat (usec): min=915, max=42024, avg=21310.31, stdev=20590.81
00:09:45.951 lat (usec): min=942, max=42049, avg=21336.33, stdev=20590.46
00:09:45.951 clat percentiles (usec):
00:09:45.951 | 1.00th=[ 914], 5.00th=[ 971], 10.00th=[ 1004], 20.00th=[ 1037],
00:09:45.951 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[ 1188], 60.00th=[41157],
00:09:45.951 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:09:45.951 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:45.951 | 99.99th=[42206]
00:09:45.951 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets
00:09:45.951 slat (nsec): min=9104, max=52792, avg=29638.37, stdev=9676.85
00:09:45.951 clat (usec): min=246, max=1011, avg=632.13, stdev=123.71
00:09:45.951 lat (usec): min=256, max=1044, avg=661.76, stdev=128.42
00:09:45.951 clat percentiles (usec):
00:09:45.951 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 529],
00:09:45.951 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 668],
00:09:45.951 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832],
00:09:45.951 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1012],
00:09:45.951 | 99.99th=[ 1012]
00:09:45.951 bw ( KiB/s): min= 4096, max= 4096, per=47.72%, avg=4096.00, stdev= 0.00, samples=1
00:09:45.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:45.951 lat (usec) : 250=0.18%, 500=14.34%, 750=64.89%, 1000=14.89%
00:09:45.951 lat (msec) : 2=2.76%, 50=2.94%
00:09:45.951 cpu : usr=1.17%, sys=1.76%, ctx=544, majf=0, minf=2
00:09:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.951 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:45.951 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:45.951 job2: (groupid=0, jobs=1): err= 0: pid=500101: Wed Nov 6 13:34:08 2024
00:09:45.951 read: IOPS=16, BW=66.2KiB/s (67.8kB/s)(68.0KiB/1027msec)
00:09:45.951 slat (nsec): min=28066, max=29290, avg=28509.82, stdev=360.38
00:09:45.951 clat (usec): min=40922, max=42020, avg=41614.67, stdev=469.08
00:09:45.951 lat (usec): min=40951, max=42048, avg=41643.18, stdev=468.86
00:09:45.951 clat percentiles (usec):
00:09:45.951 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:45.951 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:09:45.952 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:09:45.952 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:45.952 | 99.99th=[42206]
00:09:45.952 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets
00:09:45.952 slat (nsec): min=9777, max=79069, avg=35513.31, stdev=10403.34
00:09:45.952 clat (usec): min=168, max=1000, avg=580.40, stdev=144.36
00:09:45.952 lat (usec): min=184, max=1036, avg=615.92, stdev=147.39
00:09:45.952 clat percentiles (usec):
00:09:45.952 | 1.00th=[ 281], 5.00th=[ 343], 10.00th=[ 396], 20.00th=[ 461],
00:09:45.952 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611],
00:09:45.952 | 70.00th=[ 652], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 832],
00:09:45.952 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004],
00:09:45.952 | 99.99th=[ 1004]
00:09:45.952 bw ( KiB/s): min= 4096, max= 4096, per=47.72%, avg=4096.00, stdev= 0.00, samples=1
00:09:45.952 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:45.952 lat (usec) : 250=0.38%, 500=29.87%, 750=55.01%, 1000=11.34%
00:09:45.952 lat (msec) : 2=0.19%, 50=3.21%
00:09:45.952 cpu : usr=0.88%, sys=2.34%, ctx=530, majf=0, minf=1
00:09:45.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:45.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.952 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:45.952 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:45.952 job3: (groupid=0, jobs=1): err= 0: pid=500102: Wed Nov 6 13:34:08 2024
00:09:45.952 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:09:45.952 slat (nsec): min=9026, max=93322, avg=28484.31, stdev=4472.53
00:09:45.952 clat (usec): min=765, max=1313, avg=1087.39, stdev=68.52
00:09:45.952 lat (usec): min=794, max=1341, avg=1115.88, stdev=69.33
00:09:45.952 clat percentiles (usec):
00:09:45.952 | 1.00th=[ 889], 5.00th=[ 971], 10.00th=[ 1004], 20.00th=[ 1037],
00:09:45.952 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106],
00:09:45.952 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188],
00:09:45.952 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319],
00:09:45.952 | 99.99th=[ 1319]
00:09:45.952 write: IOPS=667, BW=2669KiB/s (2733kB/s)(2672KiB/1001msec); 0 zone resets
00:09:45.952 slat (nsec): min=9840, max=67506, avg=32163.28, stdev=11389.25
00:09:45.952 clat (usec): min=225, max=857, avg=595.32, stdev=113.56
00:09:45.952 lat (usec): min=245, max=902, avg=627.49, stdev=118.57
00:09:45.952 clat percentiles (usec):
00:09:45.952 | 1.00th=[ 277], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 494],
00:09:45.952 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627],
00:09:45.952 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766],
00:09:45.952 | 99.00th=[ 832], 99.50th=[ 840], 99.90th=[ 857], 99.95th=[ 857],
00:09:45.952 | 99.99th=[ 857]
00:09:45.952 bw ( KiB/s): min= 4096, max= 4096, per=47.72%, avg=4096.00, stdev= 0.00, samples=1
00:09:45.952 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:45.952 lat (usec) : 250=0.25%, 500=11.78%, 750=40.25%, 1000=8.05%
00:09:45.952 lat (msec) : 2=39.66%
00:09:45.952 cpu : usr=3.30%, sys=4.00%, ctx=1182, majf=0, minf=1
00:09:45.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:45.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:45.952 issued rwts: total=512,668,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:45.952 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:45.952
00:09:45.952 Run status group 0 (all jobs):
00:09:45.952 READ: bw=3556KiB/s (3641kB/s), 66.2KiB/s-2046KiB/s (67.8kB/s-2095kB/s), io=3652KiB (3740kB), run=1001-1027msec
00:09:45.952 WRITE: bw=8584KiB/s (8790kB/s), 1994KiB/s-2669KiB/s (2042kB/s-2733kB/s), io=8816KiB (9028kB), run=1001-1027msec
00:09:45.952
00:09:45.952 Disk stats (read/write):
00:09:45.952 nvme0n1: ios=238/512, merge=0/0, ticks=569/300, in_queue=869, util=86.97%
00:09:45.952 nvme0n2: ios=77/512, merge=0/0, ticks=565/254, in_queue=819, util=90.81%
00:09:45.952 nvme0n3: ios=61/512, merge=0/0, ticks=652/230, in_queue=882, util=95.13%
00:09:45.952 nvme0n4: ios=494/512, merge=0/0, ticks=714/253, in_queue=967, util=97.22%
00:09:45.952 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:09:45.952 [global]
00:09:45.952 thread=1
00:09:45.952 invalidate=1
00:09:45.952 rw=randwrite
00:09:45.952 time_based=1
00:09:45.952 runtime=1
00:09:45.952 ioengine=libaio
00:09:45.952 direct=1
00:09:45.952 bs=4096
00:09:45.952 iodepth=1
00:09:45.952 norandommap=0
00:09:45.952 numjobs=1
00:09:45.952
00:09:45.952 verify_dump=1
00:09:45.952 verify_backlog=512
00:09:45.952 verify_state_save=0
00:09:45.952 do_verify=1
00:09:45.952 verify=crc32c-intel
00:09:45.952 [job0]
00:09:45.952 filename=/dev/nvme0n1
00:09:45.952 [job1]
00:09:45.952 filename=/dev/nvme0n2
00:09:45.952 [job2]
00:09:45.952 filename=/dev/nvme0n3
00:09:45.952 [job3]
00:09:45.952 filename=/dev/nvme0n4
00:09:45.952 Could not set queue depth (nvme0n1)
00:09:45.952 Could not set queue depth (nvme0n2)
00:09:45.952 Could not set queue depth (nvme0n3)
00:09:45.952 Could not set queue depth (nvme0n4)
00:09:46.213 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:46.214 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:46.214 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:46.214 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:46.214 fio-3.35
00:09:46.214 Starting 4 threads
00:09:47.617
00:09:47.617 job0: (groupid=0, jobs=1): err= 0: pid=500626: Wed Nov 6 13:34:10 2024
00:09:47.617 read: IOPS=49, BW=200KiB/s (204kB/s)(204KiB/1022msec)
00:09:47.617 slat (nsec): min=7708, max=44490, avg=26074.65, stdev=7088.19
00:09:47.617 clat (usec): min=635, max=41341, avg=15533.15, stdev=19364.07
00:09:47.617 lat (usec): min=662, max=41368, avg=15559.23, stdev=19364.95
00:09:47.617 clat percentiles (usec):
00:09:47.617 | 1.00th=[ 635], 5.00th=[ 693], 10.00th=[ 783], 20.00th=[ 807],
00:09:47.617 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 914],
00:09:47.617 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:47.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:47.617 | 99.99th=[41157]
00:09:47.617 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets
00:09:47.617 slat (nsec): min=3160, max=52894, avg=25046.21, stdev=12238.03
00:09:47.617 clat (usec): min=137, max=1398, avg=413.35, stdev=93.40
00:09:47.617 lat (usec): min=148, max=1402, avg=438.39, stdev=97.90
00:09:47.617 clat percentiles (usec):
00:09:47.617 | 1.00th=[ 237], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 330],
00:09:47.617 | 30.00th=[ 359], 40.00th=[ 400], 50.00th=[ 429], 60.00th=[ 445],
00:09:47.617 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 523],
00:09:47.617 | 99.00th=[ 627], 99.50th=[ 758], 99.90th=[ 1401], 99.95th=[ 1401],
00:09:47.617 | 99.99th=[ 1401]
00:09:47.617 bw ( KiB/s): min= 4096, max= 4096, per=42.75%, avg=4096.00, stdev= 0.00, samples=1
00:09:47.617 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:47.617 lat (usec) : 250=1.24%, 500=79.75%, 750=10.12%, 1000=5.33%
00:09:47.617 lat (msec) : 2=0.18%, 50=3.37%
00:09:47.617 cpu : usr=0.49%, sys=1.57%, ctx=566, majf=0, minf=1
00:09:47.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:47.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.617 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.617 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:47.617 job1: (groupid=0, jobs=1): err= 0: pid=500627: Wed Nov 6 13:34:10 2024
00:09:47.617 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:09:47.617 slat (nsec): min=7252, max=78639, avg=24712.61, stdev=8835.24
00:09:47.617 clat (usec): min=456, max=41514, avg=1206.69, stdev=4100.15
00:09:47.617 lat (usec): min=484, max=41542, avg=1231.40, stdev=4100.47
00:09:47.617 clat percentiles (usec):
00:09:47.617 | 1.00th=[ 498], 5.00th=[ 603], 10.00th=[ 660], 20.00th=[ 701],
00:09:47.617 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 799],
00:09:47.617 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 889],
00:09:47.617 | 99.00th=[24511], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681],
00:09:47.617 | 99.99th=[41681]
00:09:47.617 write: IOPS=911, BW=3644KiB/s (3732kB/s)(3648KiB/1001msec); 0 zone resets
00:09:47.617 slat (nsec): min=9594, max=71227, avg=20752.36, stdev=12090.85
00:09:47.617 clat (usec): min=136, max=591, avg=375.60, stdev=77.38
00:09:47.617 lat (usec): min=147, max=620, avg=396.35, stdev=85.80
00:09:47.617 clat percentiles (usec):
00:09:47.617 | 1.00th=[ 233], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 302],
00:09:47.617 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 388],
00:09:47.617 | 70.00th=[ 437], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 490],
00:09:47.617 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 594],
00:09:47.617 | 99.99th=[ 594]
00:09:47.617 bw ( KiB/s): min= 4096, max= 4096, per=42.75%, avg=4096.00, stdev= 0.00, samples=1
00:09:47.617 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:47.617 lat (usec) : 250=1.05%, 500=60.88%, 750=14.75%, 1000=22.82%
00:09:47.617 lat (msec) : 2=0.07%, 50=0.42%
00:09:47.618 cpu : usr=1.70%, sys=3.20%, ctx=1425, majf=0, minf=1
00:09:47.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:47.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 issued rwts: total=512,912,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.618 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:47.618 job2: (groupid=0, jobs=1): err= 0: pid=500628: Wed Nov 6 13:34:10 2024
00:09:47.618 read: IOPS=117, BW=472KiB/s (483kB/s)(472KiB/1001msec)
00:09:47.618 slat (nsec): min=7362, max=41230, avg=26498.48, stdev=3557.70
00:09:47.618 clat (usec): min=748, max=42046, avg=5818.03, stdev=13294.41
00:09:47.618 lat (usec): min=775, max=42073, avg=5844.52, stdev=13294.20
00:09:47.618 clat percentiles (usec):
00:09:47.618 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 898],
00:09:47.618 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996],
00:09:47.618 | 70.00th=[ 1020], 80.00th=[ 1074], 90.00th=[41681], 95.00th=[42206],
00:09:47.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:47.618 | 99.99th=[42206]
00:09:47.618 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:09:47.618 slat (nsec): min=10002, max=54553, avg=30366.92, stdev=9512.24
00:09:47.618 clat (usec): min=253, max=1004, avg=566.68, stdev=118.43
00:09:47.618 lat (usec): min=264, max=1037, avg=597.05, stdev=122.82
00:09:47.618 clat percentiles (usec):
00:09:47.618 | 1.00th=[ 322], 5.00th=[ 363], 10.00th=[ 408], 20.00th=[ 469],
00:09:47.618 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[ 594],
00:09:47.618 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 750],
00:09:47.618 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 1004], 99.95th=[ 1004],
00:09:47.618 | 99.99th=[ 1004]
00:09:47.618 bw ( KiB/s): min= 4096, max= 4096, per=42.75%, avg=4096.00, stdev= 0.00, samples=1
00:09:47.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:47.618 lat (usec) : 500=25.87%, 750=51.43%, 1000=16.19%
00:09:47.618 lat (msec) : 2=4.29%, 50=2.22%
00:09:47.618 cpu : usr=1.30%, sys=1.50%, ctx=631, majf=0, minf=1
00:09:47.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:47.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 issued rwts: total=118,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.618 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:47.618 job3: (groupid=0, jobs=1): err= 0: pid=500629: Wed Nov 6 13:34:10 2024
00:09:47.618 read: IOPS=43, BW=173KiB/s (177kB/s)(176KiB/1018msec)
00:09:47.618 slat (nsec): min=7614, max=30633, avg=23388.89, stdev=7626.28
00:09:47.618 clat (usec): min=616, max=42092, avg=15708.37, stdev=20001.20
00:09:47.618 lat (usec): min=632, max=42119, avg=15731.76, stdev=20003.70
00:09:47.618 clat percentiles (usec):
00:09:47.618 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 734],
00:09:47.618 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 824], 60.00th=[ 865],
00:09:47.618 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:09:47.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:47.618 | 99.99th=[42206]
00:09:47.618 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:09:47.618 slat (nsec): min=9975, max=64117, avg=30984.08, stdev=8498.35
00:09:47.618 clat (usec): min=277, max=923, avg=595.65, stdev=117.71
00:09:47.618 lat (usec): min=288, max=957, avg=626.63, stdev=119.95
00:09:47.618 clat percentiles (usec):
00:09:47.618 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 486],
00:09:47.618 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 644],
00:09:47.618 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775],
00:09:47.618 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922],
00:09:47.618 | 99.99th=[ 922]
00:09:47.618 bw ( KiB/s): min= 4096, max= 4096, per=42.75%, avg=4096.00, stdev= 0.00, samples=1
00:09:47.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:47.618 lat (usec) : 500=22.30%, 750=63.49%, 1000=11.33%
00:09:47.618 lat (msec) : 50=2.88%
00:09:47.618 cpu : usr=0.88%, sys=1.57%, ctx=557, majf=0, minf=1
00:09:47.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:47.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.618 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.618 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:47.618
00:09:47.618 Run status group 0 (all jobs):
00:09:47.618 READ: bw=2838KiB/s (2906kB/s), 173KiB/s-2046KiB/s (177kB/s-2095kB/s), io=2900KiB (2970kB), run=1001-1022msec
00:09:47.618 WRITE: bw=9581KiB/s (9811kB/s), 2004KiB/s-3644KiB/s (2052kB/s-3732kB/s), io=9792KiB (10.0MB), run=1001-1022msec
00:09:47.618
00:09:47.618 Disk stats (read/write):
00:09:47.618 nvme0n1: ios=80/512, merge=0/0, ticks=1043/198, in_queue=1241, util=99.70%
00:09:47.618 nvme0n2: ios=543/553, merge=0/0, ticks=797/186, in_queue=983, util=96.53%
00:09:47.618 nvme0n3: ios=157/512, merge=0/0, ticks=1355/275, in_queue=1630, util=96.73%
00:09:47.618 nvme0n4: ios=77/512, merge=0/0, ticks=1265/290, in_queue=1555, util=98.72%
00:09:47.618 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:09:47.618 [global]
00:09:47.618 thread=1
00:09:47.618 invalidate=1
00:09:47.618 rw=write
00:09:47.618 time_based=1
00:09:47.618 runtime=1
00:09:47.618 ioengine=libaio
00:09:47.618 direct=1
00:09:47.618 bs=4096
00:09:47.618 iodepth=128
00:09:47.618 norandommap=0
00:09:47.618 numjobs=1
00:09:47.618
00:09:47.618 verify_dump=1
00:09:47.618 verify_backlog=512
00:09:47.618 verify_state_save=0
00:09:47.618 do_verify=1
00:09:47.618 verify=crc32c-intel
00:09:47.618 [job0]
00:09:47.618 filename=/dev/nvme0n1
00:09:47.618 [job1]
00:09:47.618 filename=/dev/nvme0n2
00:09:47.618 [job2]
00:09:47.618 filename=/dev/nvme0n3
00:09:47.618 [job3]
00:09:47.618 filename=/dev/nvme0n4
00:09:47.618 Could not set queue depth (nvme0n1)
00:09:47.618 Could not set queue depth (nvme0n2)
00:09:47.618 Could not set queue depth (nvme0n3)
00:09:47.618 Could not set queue depth (nvme0n4)
00:09:47.882 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.882 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.882 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.882 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.882 fio-3.35
00:09:47.882 Starting 4 threads
00:09:49.286
00:09:49.286 job0: (groupid=0, jobs=1): err= 0: pid=501154: Wed Nov 6 13:34:12 2024
00:09:49.286 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec)
00:09:49.286 slat (nsec): min=926, max=11769k, avg=82940.42, stdev=559066.05
00:09:49.286 clat (usec): min=4881, max=28254, avg=10674.12, stdev=3000.07
00:09:49.286 lat (usec): min=4890, max=28258, avg=10757.06, stdev=3047.33
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8225],
00:09:49.286 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11076],
00:09:49.286 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14877], 95.00th=[16450],
00:09:49.286 | 99.00th=[17957], 99.50th=[19268], 99.90th=[22938], 99.95th=[28181],
00:09:49.286 | 99.99th=[28181]
00:09:49.286 write: IOPS=6371, BW=24.9MiB/s (26.1MB/s)(25.1MiB/1008msec); 0 zone resets
00:09:49.286 slat (nsec): min=1623, max=12099k, avg=70894.73, stdev=473877.55
00:09:49.286 clat (usec): min=1876, max=30056, avg=9695.22, stdev=3551.96
00:09:49.286 lat (usec): min=1894, max=30064, avg=9766.12, stdev=3587.85
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7111],
00:09:49.286 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634],
00:09:49.286 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12649], 95.00th=[16188],
00:09:49.286 | 99.00th=[26870], 99.50th=[28443], 99.90th=[30016], 99.95th=[30016],
00:09:49.286 | 99.99th=[30016]
00:09:49.286 bw ( KiB/s): min=24600, max=25752, per=26.45%, avg=25176.00, stdev=814.59, samples=2
00:09:49.286 iops : min= 6150, max= 6438, avg=6294.00, stdev=203.65, samples=2
00:09:49.286 lat (msec) : 2=0.09%, 4=0.11%, 10=56.86%, 20=41.72%, 50=1.22%
00:09:49.286 cpu : usr=3.28%, sys=7.45%, ctx=619, majf=0, minf=1
00:09:49.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:09:49.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.286 issued rwts: total=6144,6422,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.286 job1: (groupid=0, jobs=1): err= 0: pid=501155: Wed Nov 6 13:34:12 2024
00:09:49.286 read: IOPS=7189, BW=28.1MiB/s (29.4MB/s)(28.2MiB/1003msec)
00:09:49.286 slat (nsec): min=901, max=8461.9k, avg=59932.47, stdev=422008.37
00:09:49.286 clat (usec): min=1122, max=49685, avg=8640.82, stdev=4035.00
00:09:49.286 lat (usec): min=2502, max=49689, avg=8700.75, stdev=4058.10
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 3851], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6587],
00:09:49.286 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8094],
00:09:49.286 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[12387], 95.00th=[13829],
00:09:49.286 | 99.00th=[28705], 99.50th=[32113], 99.90th=[45876], 99.95th=[45876],
00:09:49.286 | 99.99th=[49546]
00:09:49.286 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets
00:09:49.286 slat (nsec): min=1542, max=10403k, avg=53606.14, stdev=351640.17
00:09:49.286 clat (usec): min=533, max=50862, avg=7924.53, stdev=5398.73
00:09:49.286 lat (usec): min=585, max=50864, avg=7978.14, stdev=5426.69
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 1614], 5.00th=[ 3326], 10.00th=[ 4113], 20.00th=[ 5800],
00:09:49.286 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046],
00:09:49.286 | 70.00th=[ 7373], 80.00th=[ 8160], 90.00th=[11863], 95.00th=[19268],
00:09:49.286 | 99.00th=[31851], 99.50th=[38536], 99.90th=[46400], 99.95th=[46924],
00:09:49.286 | 99.99th=[51119]
00:09:49.286 bw ( KiB/s): min=28672, max=36184, per=34.07%, avg=32428.00, stdev=5311.79, samples=2
00:09:49.286 iops : min= 7168, max= 9046, avg=8107.00, stdev=1327.95, samples=2
00:09:49.286 lat (usec) : 750=0.04%, 1000=0.10%
00:09:49.286 lat (msec) : 2=0.72%, 4=3.99%, 10=80.67%, 20=10.42%, 50=4.04%
00:09:49.286 lat (msec) : 100=0.01%
00:09:49.286 cpu : usr=5.29%, sys=7.58%, ctx=717, majf=0, minf=1
00:09:49.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:49.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.286 issued rwts: total=7211,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.286 job2: (groupid=0, jobs=1): err= 0: pid=501156: Wed Nov 6 13:34:12 2024
00:09:49.286 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:09:49.286 slat (nsec): min=1000, max=8845.1k, avg=112039.80, stdev=641383.44
00:09:49.286 clat (usec): min=3105, max=32749, avg=14648.52, stdev=4087.88
00:09:49.286 lat (usec): min=3114, max=32798, avg=14760.56, stdev=4139.35
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 6980], 5.00th=[ 8029], 10.00th=[10028], 20.00th=[12256],
00:09:49.286 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14091], 60.00th=[14615],
00:09:49.286 | 70.00th=[15664], 80.00th=[17171], 90.00th=[19792], 95.00th=[22938],
00:09:49.286 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28181], 99.95th=[29230],
00:09:49.286 | 99.99th=[32637]
00:09:49.286 write: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1003msec); 0 zone resets
00:09:49.286 slat (nsec): min=1757, max=6681.6k, avg=94684.14, stdev=459543.63
00:09:49.286 clat (usec): min=1521, max=30709, avg=12480.16, stdev=4987.28
00:09:49.286 lat (usec): min=1531, max=31413, avg=12574.85, stdev=5027.17
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 3949], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 8455],
00:09:49.286 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11469], 60.00th=[13173],
00:09:49.286 | 70.00th=[14484], 80.00th=[15664], 90.00th=[18220], 95.00th=[23725],
00:09:49.286 | 99.00th=[26870], 99.50th=[28181], 99.90th=[30802], 99.95th=[30802],
00:09:49.286 | 99.99th=[30802]
00:09:49.286 bw ( KiB/s): min=16592, max=20480, per=19.48%, avg=18536.00, stdev=2749.23, samples=2
00:09:49.286 iops : min= 4148, max= 5120, avg=4634.00, stdev=687.31, samples=2
00:09:49.286 lat (msec) : 2=0.17%, 4=0.76%, 10=19.82%, 20=70.13%, 50=9.12%
00:09:49.286 cpu : usr=3.89%, sys=4.89%, ctx=484, majf=0, minf=1
00:09:49.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:09:49.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.286 issued rwts: total=4608,4762,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.286 job3: (groupid=0, jobs=1): err= 0: pid=501157: Wed Nov 6 13:34:12 2024
00:09:49.286 read: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1004msec)
00:09:49.286 slat (usec): min=2, max=26500, avg=105.11, stdev=735524.35
00:09:49.286 clat (usec): min=1980, max=30068, avg=14059.31, stdev=4932.00
00:09:49.286 lat (usec): min=4081, max=30074, avg=14171.86, stdev=4971.06
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 5538], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9765],
00:09:49.286 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13173], 60.00th=[15401],
00:09:49.286 | 70.00th=[16712], 80.00th=[18482], 90.00th=[20317], 95.00th=[22676],
00:09:49.286 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30016], 99.95th=[30016],
00:09:49.286 | 99.99th=[30016]
00:09:49.286 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets
00:09:49.286 slat (nsec): min=1687, max=14854k, avg=109887.56, stdev=565493.23
00:09:49.286 clat (usec): min=2178, max=50730, avg=15029.41, stdev=8457.14
00:09:49.286 lat (usec): min=2186, max=50758, avg=15139.30, stdev=8517.33
00:09:49.286 clat percentiles (usec):
00:09:49.286 | 1.00th=[ 3163], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[ 8979],
00:09:49.286 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13173], 60.00th=[15270],
00:09:49.286 | 70.00th=[16188], 80.00th=[17695], 90.00th=[22414], 95.00th=[36439],
00:09:49.286 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594],
00:09:49.286 | 99.99th=[50594]
00:09:49.286 bw ( KiB/s): min=16320, max=19776, per=18.96%, avg=18048.00, stdev=2443.76, samples=2
00:09:49.286 iops : min= 4080, max= 4944, avg=4512.00, stdev=610.94, samples=2
00:09:49.286 lat (msec) : 2=0.01%, 4=0.79%, 10=24.42%, 20=63.75%, 50=10.95%
00:09:49.286 lat (msec) : 100=0.08%
00:09:49.286 cpu : usr=3.39%, sys=4.29%, ctx=510, majf=0, minf=1
00:09:49.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:49.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.286 issued rwts: total=4128,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.286 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.286
00:09:49.286 Run status group 0 (all jobs):
00:09:49.286 READ: bw=85.6MiB/s (89.8MB/s), 16.1MiB/s-28.1MiB/s (16.8MB/s-29.4MB/s), io=86.3MiB (90.5MB), run=1003-1008msec
00:09:49.286 WRITE: bw=92.9MiB/s (97.5MB/s), 17.9MiB/s-31.9MiB/s (18.8MB/s-33.5MB/s), io=93.7MiB (98.2MB), run=1003-1008msec
00:09:49.286
00:09:49.286 Disk stats (read/write):
00:09:49.286 nvme0n1: ios=5154/5303, merge=0/0, ticks=34723/29950, in_queue=64673, util=99.40%
00:09:49.286 nvme0n2: ios=6015/6656, merge=0/0, ticks=44067/41258, in_queue=85325, util=100.00%
00:09:49.286 nvme0n3: ios=3638/4096, merge=0/0, ticks=23881/21845, in_queue=45726, util=96.84%
00:09:49.286 nvme0n4: ios=3606/4096, merge=0/0, ticks=33379/45563, in_queue=78942, util=96.69%
00:09:49.286 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:09:49.286 [global]
00:09:49.286 thread=1
00:09:49.286 invalidate=1
00:09:49.286 rw=randwrite
00:09:49.286 time_based=1
00:09:49.287 runtime=1
00:09:49.287 ioengine=libaio
00:09:49.287 direct=1
00:09:49.287 bs=4096
00:09:49.287 iodepth=128
00:09:49.287 norandommap=0
00:09:49.287 numjobs=1
00:09:49.287
00:09:49.287 verify_dump=1
00:09:49.287 verify_backlog=512
00:09:49.287 verify_state_save=0
00:09:49.287 do_verify=1
00:09:49.287 verify=crc32c-intel
00:09:49.287 [job0]
00:09:49.287 filename=/dev/nvme0n1
00:09:49.287 [job1]
00:09:49.287 filename=/dev/nvme0n2
00:09:49.287 [job2]
00:09:49.287 filename=/dev/nvme0n3
00:09:49.287 [job3]
00:09:49.287 filename=/dev/nvme0n4
00:09:49.287 Could not set queue depth (nvme0n1)
00:09:49.287 Could not set queue depth (nvme0n2)
00:09:49.287 Could not set queue depth (nvme0n3)
00:09:49.287 Could not set queue depth (nvme0n4)
00:09:49.553 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:49.553 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:49.553 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:49.553 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:49.553 fio-3.35
00:09:49.553 Starting 4 threads
00:09:50.959
00:09:50.959 job0: (groupid=0, jobs=1): err= 0: pid=501675: Wed Nov 6 13:34:13 2024
00:09:50.959 read: IOPS=7719, BW=30.2MiB/s (31.6MB/s)(30.3MiB/1006msec)
00:09:50.959 slat (nsec): min=906, max=6642.0k, avg=66571.02, stdev=473319.61
00:09:50.959 clat (usec): min=2965, max=21042, avg=8512.66, stdev=2585.26
00:09:50.959 lat (usec): min=2984, max=21066, avg=8579.23, stdev=2619.90
00:09:50.959 clat percentiles (usec):
00:09:50.959 | 1.00th=[ 3654], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6783],
00:09:50.959 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8160],
00:09:50.959 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[12256], 95.00th=[13698],
00:09:50.959 | 99.00th=[15664], 99.50th=[18220], 99.90th=[19792], 99.95th=[20841],
00:09:50.959 | 99.99th=[21103]
00:09:50.959 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets
00:09:50.959 slat (nsec): min=1525, max=8856.5k, avg=53861.72, stdev=311728.74
00:09:50.959 clat (usec): min=1631, max=30005, avg=7463.81, stdev=3196.10
00:09:50.959 lat (usec): min=1640, max=30012, avg=7517.67, stdev=3219.55
00:09:50.959 clat percentiles (usec):
00:09:50.959 | 1.00th=[ 2704], 5.00th=[ 3818], 10.00th=[ 4490], 20.00th=[ 5735],
00:09:50.959 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177],
00:09:50.959 | 70.00th=[ 7767], 80.00th=[ 8455], 90.00th=[10683], 95.00th=[11731],
00:09:50.959 | 99.00th=[25297], 99.50th=[26346], 99.90th=[29754], 99.95th=[30016],
00:09:50.959 | 99.99th=[30016]
00:09:50.959 bw ( KiB/s): min=28672, max=36536, per=33.36%, avg=32604.00, stdev=5560.69, samples=2
00:09:50.959 iops : min= 7168, max= 9134, avg=8151.00, stdev=1390.17, samples=2
00:09:50.959 lat (msec) : 2=0.07%, 4=4.07%, 10=78.38%, 20=16.63%, 50=0.85%
00:09:50.959 cpu : usr=4.48%, sys=8.36%, ctx=830, majf=0, minf=1
00:09:50.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:50.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:50.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:50.959 issued rwts: total=7766,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:50.959 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:50.959 job1: (groupid=0, jobs=1): err= 0: pid=501678: Wed Nov 6 13:34:13 2024
00:09:50.959 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec)
00:09:50.959 slat (nsec): min=900, max=44385k, avg=67038.30, stdev=659656.83
00:09:50.959 clat (usec): min=2947, max=58237, avg=8772.74, stdev=6172.10
00:09:50.959 lat (usec): min=2955, max=58246, avg=8839.78, stdev=6204.79
00:09:50.959 clat percentiles (usec):
00:09:50.959 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6652],
00:09:50.960 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8094],
00:09:50.960 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10814], 95.00th=[12256],
00:09:50.960 | 99.00th=[52691], 99.50th=[53740], 99.90th=[58459], 99.95th=[58459],
00:09:50.960 | 99.99th=[58459]
00:09:50.960 write: IOPS=7681, BW=30.0MiB/s (31.5MB/s)(30.2MiB/1005msec); 0 zone resets
00:09:50.960 slat (nsec): min=1488, max=6362.5k, avg=57040.39, stdev=352711.48
00:09:50.960 clat (usec): min=2400, max=20130, avg=7738.73, stdev=2675.38
00:09:50.960 lat (usec): min=2409, max=20132, avg=7795.77, stdev=2686.12
00:09:50.960 clat percentiles (usec):
00:09:50.960 | 1.00th=[ 3228], 5.00th=[ 4146], 10.00th=[ 4621], 20.00th=[ 6259],
00:09:50.960 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7570],
00:09:50.960 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[14353],
00:09:50.960 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695],
00:09:50.960 | 99.99th=[20055]
00:09:50.960 bw ( KiB/s): min=27912, max=33528, per=31.43%, avg=30720.00, stdev=3971.11, samples=2
00:09:50.960 iops : min= 6978, max= 8382, avg=7680.00, stdev=992.78, samples=2
00:09:50.960 lat (msec) : 4=2.01%, 10=84.49%, 20=12.63%, 50=0.08%, 100=0.79%
00:09:50.960 cpu : usr=5.28%, sys=6.97%, ctx=624, majf=0, minf=1
00:09:50.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:50.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:50.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:50.960 issued rwts: total=7680,7720,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:50.960 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:50.960 job2: (groupid=0, jobs=1): err= 0: pid=501679: Wed Nov 6 13:34:13 2024
00:09:50.960 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec)
00:09:50.960 slat (nsec): min=1036, max=15779k, avg=138638.74, stdev=989441.82
00:09:50.960 clat (usec): min=3467, max=47040, avg=16695.98, stdev=7245.17
00:09:50.960 lat (usec): min=3472, max=47048, avg=16834.62, stdev=7331.57
00:09:50.960 clat percentiles (usec):
00:09:50.960 | 1.00th=[ 4948], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10683],
00:09:50.960 | 30.00th=[11600], 40.00th=[12780], 50.00th=[14091], 60.00th=[16450],
00:09:50.960 | 70.00th=[21103], 80.00th=[21890], 90.00th=[26608], 95.00th=[31327],
00:09:50.960 | 99.00th=[36963], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924],
00:09:50.960 | 99.99th=[46924]
00:09:50.960 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1003msec); 0 zone resets
00:09:50.960 slat (nsec): min=1688, max=19290k, avg=152064.98, stdev=953951.17
00:09:50.960 clat (usec): min=2361, max=94378, avg=21068.00, stdev=14520.98
00:09:50.960 lat (usec): min=2369, max=94383, avg=21220.06, stdev=14600.90
00:09:50.960 clat percentiles (usec):
00:09:50.960 | 1.00th=[ 2638], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[11076],
00:09:50.960 | 30.00th=[14091], 40.00th=[17171], 50.00th=[18220], 60.00th=[20579],
00:09:50.960 | 70.00th=[23200], 80.00th=[24511], 90.00th=[33162], 95.00th=[52167],
00:09:50.960 | 99.00th=[90702], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897],
00:09:50.960 | 99.99th=[94897]
00:09:50.960 bw ( KiB/s): min=12288, max=15088, per=14.00%, avg=13688.00, stdev=1979.90, samples=2
00:09:50.960 iops : min= 3072, max= 3772, avg=3422.00, stdev=494.97, samples=2
00:09:50.960 lat (msec) : 4=1.03%, 10=15.92%, 20=43.11%, 50=36.90%, 100=3.05%
00:09:50.960 cpu : usr=2.30%, sys=4.09%, ctx=337, majf=0, minf=2
00:09:50.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:09:50.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:50.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:50.960 issued rwts: total=3072,3549,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:50.960 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:50.960 job3: (groupid=0, jobs=1): err= 0: pid=501680: Wed Nov 6 13:34:13 2024
00:09:50.960 read: IOPS=4902, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1004msec)
00:09:50.960 slat (usec): min=2, max=26500, avg=105.11, stdev=947.35
00:09:50.960 clat (usec): min=1980, max=60103, avg=13653.40, stdev=7070.62
00:09:50.960 lat (usec): min=2019, max=62313, avg=13758.51, stdev=7152.08
00:09:50.960 clat percentiles (usec):
00:09:50.960 | 1.00th=[ 4228], 5.00th=[ 7701], 10.00th=[
8979], 20.00th=[ 9503], 00:09:50.960 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11863], 60.00th=[12518], 00:09:50.960 | 70.00th=[13435], 80.00th=[15008], 90.00th=[21890], 95.00th=[31589], 00:09:50.960 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[47973], 00:09:50.960 | 99.99th=[60031] 00:09:50.960 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:50.960 slat (nsec): min=1605, max=19826k, avg=80324.35, stdev=651124.59 00:09:50.960 clat (usec): min=720, max=46003, avg=11748.19, stdev=7017.04 00:09:50.960 lat (usec): min=753, max=46006, avg=11828.51, stdev=7060.53 00:09:50.960 clat percentiles (usec): 00:09:50.960 | 1.00th=[ 1303], 5.00th=[ 3228], 10.00th=[ 4883], 20.00th=[ 5866], 00:09:50.960 | 30.00th=[ 7635], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10945], 00:09:50.960 | 70.00th=[13960], 80.00th=[16909], 90.00th=[23200], 95.00th=[26346], 00:09:50.960 | 99.00th=[32900], 99.50th=[32900], 99.90th=[34866], 99.95th=[37487], 00:09:50.960 | 99.99th=[45876] 00:09:50.960 bw ( KiB/s): min=16384, max=24576, per=20.95%, avg=20480.00, stdev=5792.62, samples=2 00:09:50.960 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:09:50.960 lat (usec) : 750=0.01% 00:09:50.960 lat (msec) : 2=1.48%, 4=2.55%, 10=36.27%, 20=45.11%, 50=14.56% 00:09:50.960 lat (msec) : 100=0.02% 00:09:50.960 cpu : usr=3.89%, sys=5.48%, ctx=288, majf=0, minf=2 00:09:50.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.960 issued rwts: total=4922,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.960 00:09:50.960 Run status group 0 (all jobs): 00:09:50.960 READ: bw=91.0MiB/s (95.4MB/s), 12.0MiB/s-30.2MiB/s (12.5MB/s-31.6MB/s), io=91.6MiB (96.0MB), run=1003-1006msec 
00:09:50.960 WRITE: bw=95.4MiB/s (100MB/s), 13.8MiB/s-31.8MiB/s (14.5MB/s-33.4MB/s), io=96.0MiB (101MB), run=1003-1006msec 00:09:50.960 00:09:50.960 Disk stats (read/write): 00:09:50.960 nvme0n1: ios=6311/6656, merge=0/0, ticks=39747/34957, in_queue=74704, util=87.58% 00:09:50.960 nvme0n2: ios=6174/6254, merge=0/0, ticks=36413/29370, in_queue=65783, util=94.80% 00:09:50.960 nvme0n3: ios=2373/2560, merge=0/0, ticks=24566/27722, in_queue=52288, util=100.00% 00:09:50.960 nvme0n4: ios=4348/4608, merge=0/0, ticks=45790/50725, in_queue=96515, util=89.41% 00:09:50.960 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:50.960 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=502014 00:09:50.960 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:50.960 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:50.960 [global] 00:09:50.960 thread=1 00:09:50.960 invalidate=1 00:09:50.960 rw=read 00:09:50.960 time_based=1 00:09:50.960 runtime=10 00:09:50.960 ioengine=libaio 00:09:50.960 direct=1 00:09:50.960 bs=4096 00:09:50.960 iodepth=1 00:09:50.960 norandommap=1 00:09:50.960 numjobs=1 00:09:50.960 00:09:50.960 [job0] 00:09:50.960 filename=/dev/nvme0n1 00:09:50.960 [job1] 00:09:50.960 filename=/dev/nvme0n2 00:09:50.960 [job2] 00:09:50.960 filename=/dev/nvme0n3 00:09:50.960 [job3] 00:09:50.960 filename=/dev/nvme0n4 00:09:50.960 Could not set queue depth (nvme0n1) 00:09:50.960 Could not set queue depth (nvme0n2) 00:09:50.960 Could not set queue depth (nvme0n3) 00:09:50.960 Could not set queue depth (nvme0n4) 00:09:51.233 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.233 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:51.233 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.233 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.233 fio-3.35 00:09:51.233 Starting 4 threads 00:09:53.775 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:54.036 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:54.036 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=630784, buflen=4096 00:09:54.036 fio: pid=502207, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.036 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.036 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:54.036 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=286720, buflen=4096 00:09:54.036 fio: pid=502206, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.297 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10452992, buflen=4096 00:09:54.297 fio: pid=502204, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.297 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.297 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:54.558 13:34:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.559 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:54.559 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=323584, buflen=4096 00:09:54.559 fio: pid=502205, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.559 00:09:54.559 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=502204: Wed Nov 6 13:34:17 2024 00:09:54.559 read: IOPS=867, BW=3470KiB/s (3553kB/s)(9.97MiB/2942msec) 00:09:54.559 slat (usec): min=6, max=26085, avg=47.94, stdev=632.90 00:09:54.559 clat (usec): min=532, max=1350, avg=1088.02, stdev=108.86 00:09:54.559 lat (usec): min=558, max=27038, avg=1135.97, stdev=640.07 00:09:54.559 clat percentiles (usec): 00:09:54.559 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 938], 20.00th=[ 1037], 00:09:54.559 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:09:54.559 | 70.00th=[ 1156], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:09:54.559 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1352], 00:09:54.559 | 99.99th=[ 1352] 00:09:54.559 bw ( KiB/s): min= 3480, max= 3528, per=96.14%, avg=3500.80, stdev=21.61, samples=5 00:09:54.559 iops : min= 870, max= 882, avg=875.20, stdev= 5.40, samples=5 00:09:54.559 lat (usec) : 750=1.21%, 1000=14.96% 00:09:54.559 lat (msec) : 2=83.78% 00:09:54.559 cpu : usr=1.02%, sys=2.52%, ctx=2558, majf=0, minf=1 00:09:54.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 issued rwts: total=2553,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:54.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.559 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=502205: Wed Nov 6 13:34:17 2024 00:09:54.559 read: IOPS=25, BW=101KiB/s (103kB/s)(316KiB/3137msec) 00:09:54.559 slat (usec): min=25, max=16692, avg=440.40, stdev=2286.00 00:09:54.559 clat (usec): min=847, max=41954, avg=38975.04, stdev=8857.45 00:09:54.559 lat (usec): min=883, max=57933, avg=39349.73, stdev=9217.77 00:09:54.559 clat percentiles (usec): 00:09:54.559 | 1.00th=[ 848], 5.00th=[ 898], 10.00th=[40633], 20.00th=[41157], 00:09:54.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:54.559 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:54.559 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.559 | 99.99th=[42206] 00:09:54.559 bw ( KiB/s): min= 93, max= 112, per=2.75%, avg=100.83, stdev= 7.11, samples=6 00:09:54.559 iops : min= 23, max= 28, avg=25.17, stdev= 1.83, samples=6 00:09:54.559 lat (usec) : 1000=5.00% 00:09:54.559 lat (msec) : 50=93.75% 00:09:54.559 cpu : usr=0.13%, sys=0.00%, ctx=83, majf=0, minf=2 00:09:54.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.559 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=502206: Wed Nov 6 13:34:17 2024 00:09:54.559 read: IOPS=25, BW=101KiB/s (103kB/s)(280KiB/2774msec) 00:09:54.559 slat (usec): min=26, max=660, avg=36.42, stdev=75.34 00:09:54.559 clat (usec): min=550, max=41904, avg=39272.18, stdev=8205.14 
00:09:54.559 lat (usec): min=614, max=42024, avg=39308.75, stdev=8205.33 00:09:54.559 clat percentiles (usec): 00:09:54.559 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:54.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:54.559 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:54.559 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:54.559 | 99.99th=[41681] 00:09:54.559 bw ( KiB/s): min= 96, max= 112, per=2.75%, avg=100.80, stdev= 7.16, samples=5 00:09:54.559 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:09:54.559 lat (usec) : 750=1.41%, 1000=2.82% 00:09:54.559 lat (msec) : 50=94.37% 00:09:54.559 cpu : usr=0.00%, sys=0.11%, ctx=73, majf=0, minf=2 00:09:54.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.559 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=502207: Wed Nov 6 13:34:17 2024 00:09:54.559 read: IOPS=58, BW=234KiB/s (239kB/s)(616KiB/2635msec) 00:09:54.559 slat (nsec): min=7663, max=54732, avg=26936.11, stdev=4019.04 00:09:54.559 clat (usec): min=798, max=42104, avg=16927.47, stdev=20020.74 00:09:54.559 lat (usec): min=806, max=42131, avg=16954.40, stdev=20020.81 00:09:54.559 clat percentiles (usec): 00:09:54.559 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 955], 00:09:54.559 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1188], 00:09:54.559 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:54.559 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.559 
| 99.99th=[42206] 00:09:54.559 bw ( KiB/s): min= 96, max= 824, per=6.62%, avg=241.60, stdev=325.57, samples=5 00:09:54.559 iops : min= 24, max= 206, avg=60.40, stdev=81.39, samples=5 00:09:54.559 lat (usec) : 1000=34.84% 00:09:54.559 lat (msec) : 2=25.81%, 50=38.71% 00:09:54.559 cpu : usr=0.11%, sys=0.11%, ctx=157, majf=0, minf=2 00:09:54.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.559 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.559 00:09:54.559 Run status group 0 (all jobs): 00:09:54.559 READ: bw=3640KiB/s (3728kB/s), 101KiB/s-3470KiB/s (103kB/s-3553kB/s), io=11.2MiB (11.7MB), run=2635-3137msec 00:09:54.559 00:09:54.559 Disk stats (read/write): 00:09:54.559 nvme0n1: ios=2477/0, merge=0/0, ticks=2627/0, in_queue=2627, util=93.29% 00:09:54.559 nvme0n2: ios=113/0, merge=0/0, ticks=3139/0, in_queue=3139, util=97.27% 00:09:54.559 nvme0n3: ios=66/0, merge=0/0, ticks=2586/0, in_queue=2586, util=96.15% 00:09:54.559 nvme0n4: ios=188/0, merge=0/0, ticks=3061/0, in_queue=3061, util=100.00% 00:09:54.559 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.559 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:54.820 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.820 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
00:09:55.081 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.081 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:55.081 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.081 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:55.341 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:55.341 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 502014 00:09:55.341 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:55.341 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.602 13:34:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:55.602 nvmf hotplug test: fio failed as expected 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.602 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.602 rmmod nvme_tcp 00:09:55.862 rmmod nvme_fabrics 00:09:55.862 rmmod nvme_keyring 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 498340 ']' 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 498340 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 498340 ']' 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 498340 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 498340 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 498340' 00:09:55.862 killing process with pid 498340 00:09:55.862 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 498340 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 498340 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.863 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.407 00:09:58.407 real 0m29.216s 00:09:58.407 user 2m44.456s 00:09:58.407 sys 0m9.446s 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.407 ************************************ 00:09:58.407 END TEST nvmf_fio_target 00:09:58.407 ************************************ 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:58.407 13:34:21 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.407 ************************************ 00:09:58.407 START TEST nvmf_bdevio 00:09:58.407 ************************************ 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:58.407 * Looking for test storage... 00:09:58.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:58.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.407 --rc genhtml_branch_coverage=1 00:09:58.407 --rc genhtml_function_coverage=1 00:09:58.407 --rc genhtml_legend=1 00:09:58.407 --rc geninfo_all_blocks=1 00:09:58.407 --rc geninfo_unexecuted_blocks=1 00:09:58.407 00:09:58.407 ' 00:09:58.407 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:58.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.408 --rc genhtml_branch_coverage=1 00:09:58.408 --rc genhtml_function_coverage=1 00:09:58.408 --rc genhtml_legend=1 00:09:58.408 --rc geninfo_all_blocks=1 00:09:58.408 --rc geninfo_unexecuted_blocks=1 00:09:58.408 00:09:58.408 ' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:58.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.408 --rc genhtml_branch_coverage=1 00:09:58.408 --rc genhtml_function_coverage=1 00:09:58.408 --rc genhtml_legend=1 00:09:58.408 --rc geninfo_all_blocks=1 00:09:58.408 --rc geninfo_unexecuted_blocks=1 00:09:58.408 00:09:58.408 ' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:58.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.408 --rc genhtml_branch_coverage=1 00:09:58.408 --rc genhtml_function_coverage=1 00:09:58.408 --rc genhtml_legend=1 00:09:58.408 --rc geninfo_all_blocks=1 00:09:58.408 --rc geninfo_unexecuted_blocks=1 00:09:58.408 00:09:58.408 ' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.408 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.543 13:34:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.543 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.543 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.543 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:10:06.544 00:10:06.544 --- 10.0.0.2 ping statistics --- 00:10:06.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.544 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:06.544 00:10:06.544 --- 10.0.0.1 ping statistics --- 00:10:06.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.544 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.544 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=507258 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 507258 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 507258 ']' 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.544 [2024-11-06 13:34:29.067342] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:10:06.544 [2024-11-06 13:34:29.067401] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.544 [2024-11-06 13:34:29.168887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.544 [2024-11-06 13:34:29.221847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.544 [2024-11-06 13:34:29.221902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.544 [2024-11-06 13:34:29.221911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.544 [2024-11-06 13:34:29.221919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.544 [2024-11-06 13:34:29.221926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.544 [2024-11-06 13:34:29.224319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.544 [2024-11-06 13:34:29.224482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:06.544 [2024-11-06 13:34:29.224515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:06.544 [2024-11-06 13:34:29.224518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.544 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 [2024-11-06 13:34:29.946776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 Malloc0 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.805 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.805 [2024-11-06 
13:34:30.006885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.805 { 00:10:06.805 "params": { 00:10:06.805 "name": "Nvme$subsystem", 00:10:06.805 "trtype": "$TEST_TRANSPORT", 00:10:06.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.805 "adrfam": "ipv4", 00:10:06.805 "trsvcid": "$NVMF_PORT", 00:10:06.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.805 "hdgst": ${hdgst:-false}, 00:10:06.805 "ddgst": ${ddgst:-false} 00:10:06.805 }, 00:10:06.805 "method": "bdev_nvme_attach_controller" 00:10:06.805 } 00:10:06.805 EOF 00:10:06.805 )") 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:06.805 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.805 "params": { 00:10:06.805 "name": "Nvme1", 00:10:06.805 "trtype": "tcp", 00:10:06.805 "traddr": "10.0.0.2", 00:10:06.805 "adrfam": "ipv4", 00:10:06.805 "trsvcid": "4420", 00:10:06.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.805 "hdgst": false, 00:10:06.805 "ddgst": false 00:10:06.805 }, 00:10:06.805 "method": "bdev_nvme_attach_controller" 00:10:06.805 }' 00:10:06.805 [2024-11-06 13:34:30.063881] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:10:06.805 [2024-11-06 13:34:30.063954] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507596 ] 00:10:06.805 [2024-11-06 13:34:30.142741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.065 [2024-11-06 13:34:30.187020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.065 [2024-11-06 13:34:30.187181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.065 [2024-11-06 13:34:30.187185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.325 I/O targets: 00:10:07.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:07.325 00:10:07.325 00:10:07.325 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.325 http://cunit.sourceforge.net/ 00:10:07.325 00:10:07.325 00:10:07.325 Suite: bdevio tests on: Nvme1n1 00:10:07.325 Test: blockdev write read block ...passed 00:10:07.325 Test: blockdev write zeroes read block ...passed 00:10:07.325 Test: blockdev write zeroes read no split ...passed 00:10:07.325 Test: blockdev write zeroes read split 
...passed 00:10:07.325 Test: blockdev write zeroes read split partial ...passed 00:10:07.325 Test: blockdev reset ...[2024-11-06 13:34:30.698095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:07.325 [2024-11-06 13:34:30.698166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1512970 (9): Bad file descriptor 00:10:07.585 [2024-11-06 13:34:30.759192] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:07.585 passed 00:10:07.585 Test: blockdev write read 8 blocks ...passed 00:10:07.585 Test: blockdev write read size > 128k ...passed 00:10:07.585 Test: blockdev write read invalid size ...passed 00:10:07.585 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:07.585 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:07.585 Test: blockdev write read max offset ...passed 00:10:07.585 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:07.585 Test: blockdev writev readv 8 blocks ...passed 00:10:07.585 Test: blockdev writev readv 30 x 1block ...passed 00:10:07.845 Test: blockdev writev readv block ...passed 00:10:07.845 Test: blockdev writev readv size > 128k ...passed 00:10:07.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:07.845 Test: blockdev comparev and writev ...[2024-11-06 13:34:30.983187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.983213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.983224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 
13:34:30.983230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.983687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.983697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.983707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.983712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.984183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.984192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.984202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.984207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.984652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.984661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:30.984671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.846 [2024-11-06 13:34:30.984677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:07.846 passed 00:10:07.846 Test: blockdev nvme passthru rw ...passed 00:10:07.846 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:34:31.070605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.846 [2024-11-06 13:34:31.070616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:31.070951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.846 [2024-11-06 13:34:31.070960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:31.071287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.846 [2024-11-06 13:34:31.071295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:07.846 [2024-11-06 13:34:31.071633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.846 [2024-11-06 13:34:31.071642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:07.846 passed 00:10:07.846 Test: blockdev nvme admin passthru ...passed 00:10:07.846 Test: blockdev copy ...passed 00:10:07.846 00:10:07.846 Run Summary: Type Total Ran Passed Failed Inactive 00:10:07.846 suites 1 1 n/a 0 0 00:10:07.846 tests 23 23 23 0 0 00:10:07.846 asserts 152 152 152 0 n/a 00:10:07.846 00:10:07.846 Elapsed time = 1.289 seconds 
00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.106 rmmod nvme_tcp 00:10:08.106 rmmod nvme_fabrics 00:10:08.106 rmmod nvme_keyring 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 507258 ']' 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 507258 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- 
# '[' -z 507258 ']' 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 507258 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 507258 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 507258' 00:10:08.106 killing process with pid 507258 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 507258 00:10:08.106 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 507258 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.366 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.276 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.276 00:10:10.276 real 0m12.217s 00:10:10.276 user 0m13.799s 00:10:10.276 sys 0m6.199s 00:10:10.277 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.277 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.277 ************************************ 00:10:10.277 END TEST nvmf_bdevio 00:10:10.277 ************************************ 00:10:10.277 13:34:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:10.277 00:10:10.277 real 5m2.723s 00:10:10.277 user 11m52.569s 00:10:10.277 sys 1m48.255s 00:10:10.277 13:34:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.277 13:34:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.277 ************************************ 00:10:10.277 END TEST nvmf_target_core 00:10:10.277 ************************************ 00:10:10.537 13:34:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.537 13:34:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.537 13:34:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.537 13:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:10.537 ************************************ 00:10:10.537 START TEST nvmf_target_extra 00:10:10.537 ************************************ 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.537 * Looking for test storage... 00:10:10.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:10.537 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.538 --rc genhtml_branch_coverage=1 00:10:10.538 --rc genhtml_function_coverage=1 00:10:10.538 --rc genhtml_legend=1 00:10:10.538 --rc geninfo_all_blocks=1 
00:10:10.538 --rc geninfo_unexecuted_blocks=1 00:10:10.538 00:10:10.538 ' 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.538 --rc genhtml_branch_coverage=1 00:10:10.538 --rc genhtml_function_coverage=1 00:10:10.538 --rc genhtml_legend=1 00:10:10.538 --rc geninfo_all_blocks=1 00:10:10.538 --rc geninfo_unexecuted_blocks=1 00:10:10.538 00:10:10.538 ' 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.538 --rc genhtml_branch_coverage=1 00:10:10.538 --rc genhtml_function_coverage=1 00:10:10.538 --rc genhtml_legend=1 00:10:10.538 --rc geninfo_all_blocks=1 00:10:10.538 --rc geninfo_unexecuted_blocks=1 00:10:10.538 00:10:10.538 ' 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.538 --rc genhtml_branch_coverage=1 00:10:10.538 --rc genhtml_function_coverage=1 00:10:10.538 --rc genhtml_legend=1 00:10:10.538 --rc geninfo_all_blocks=1 00:10:10.538 --rc geninfo_unexecuted_blocks=1 00:10:10.538 00:10:10.538 ' 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.538 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.798 13:34:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.799 ************************************ 00:10:10.799 START TEST nvmf_example 00:10:10.799 ************************************ 00:10:10.799 13:34:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.799 * Looking for test storage... 00:10:10.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.799 
13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.799 --rc genhtml_branch_coverage=1 00:10:10.799 --rc genhtml_function_coverage=1 00:10:10.799 --rc genhtml_legend=1 00:10:10.799 --rc geninfo_all_blocks=1 00:10:10.799 --rc geninfo_unexecuted_blocks=1 00:10:10.799 00:10:10.799 ' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.799 --rc genhtml_branch_coverage=1 00:10:10.799 --rc genhtml_function_coverage=1 00:10:10.799 --rc genhtml_legend=1 00:10:10.799 --rc geninfo_all_blocks=1 00:10:10.799 --rc geninfo_unexecuted_blocks=1 00:10:10.799 00:10:10.799 ' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.799 --rc genhtml_branch_coverage=1 00:10:10.799 --rc genhtml_function_coverage=1 00:10:10.799 --rc genhtml_legend=1 00:10:10.799 --rc geninfo_all_blocks=1 00:10:10.799 --rc geninfo_unexecuted_blocks=1 00:10:10.799 00:10:10.799 ' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.799 --rc 
genhtml_branch_coverage=1 00:10:10.799 --rc genhtml_function_coverage=1 00:10:10.799 --rc genhtml_legend=1 00:10:10.799 --rc geninfo_all_blocks=1 00:10:10.799 --rc geninfo_unexecuted_blocks=1 00:10:10.799 00:10:10.799 ' 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.799 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.060 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:11.061 13:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.061 
13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.061 13:34:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.205 13:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.205 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:19.206 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:19.206 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:19.206 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.206 13:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:19.206 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.206 
13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:10:19.206 00:10:19.206 --- 10.0.0.2 ping statistics --- 00:10:19.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.206 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:10:19.206 00:10:19.206 --- 10.0.0.1 ping statistics --- 00:10:19.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.206 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.206 13:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=512200 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 512200 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 512200 ']' 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:19.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.206 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.206 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.206 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:19.206 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:19.206 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.206 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.466 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:19.467 13:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:19.467 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:31.700 Initializing NVMe Controllers 00:10:31.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:31.700 Initialization complete. Launching workers. 00:10:31.700 ======================================================== 00:10:31.700 Latency(us) 00:10:31.700 Device Information : IOPS MiB/s Average min max 00:10:31.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17862.53 69.78 3584.32 696.81 16039.55 00:10:31.700 ======================================================== 00:10:31.700 Total : 17862.53 69.78 3584.32 696.81 16039.55 00:10:31.700 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.700 rmmod nvme_tcp 00:10:31.700 rmmod nvme_fabrics 00:10:31.700 rmmod nvme_keyring 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 512200 ']' 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 512200 00:10:31.700 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 512200 ']' 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 512200 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 512200 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 512200' 00:10:31.701 killing process with pid 512200 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 512200 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 512200 00:10:31.701 nvmf threads initialize successfully 00:10:31.701 bdev subsystem init successfully 00:10:31.701 created a nvmf target service 00:10:31.701 create targets's poll groups done 00:10:31.701 all subsystems of target started 00:10:31.701 nvmf target is running 00:10:31.701 all subsystems of target stopped 00:10:31.701 destroy targets's poll groups done 00:10:31.701 destroyed the nvmf target service 00:10:31.701 bdev subsystem finish 
successfully 00:10:31.701 nvmf threads destroy successfully 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.701 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.274 00:10:32.274 real 0m21.430s 00:10:32.274 user 0m47.005s 00:10:32.274 sys 0m6.860s 00:10:32.274 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.274 ************************************ 00:10:32.274 END TEST nvmf_example 00:10:32.274 ************************************ 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.274 ************************************ 00:10:32.274 START TEST nvmf_filesystem 00:10:32.274 ************************************ 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.274 * Looking for test storage... 
00:10:32.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:32.274 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:32.539 
13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:32.539 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:32.539 --rc genhtml_branch_coverage=1 00:10:32.539 --rc genhtml_function_coverage=1 00:10:32.539 --rc genhtml_legend=1 00:10:32.539 --rc geninfo_all_blocks=1 00:10:32.539 --rc geninfo_unexecuted_blocks=1 00:10:32.539 00:10:32.539 ' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:32.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.539 --rc genhtml_branch_coverage=1 00:10:32.539 --rc genhtml_function_coverage=1 00:10:32.539 --rc genhtml_legend=1 00:10:32.539 --rc geninfo_all_blocks=1 00:10:32.539 --rc geninfo_unexecuted_blocks=1 00:10:32.539 00:10:32.539 ' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:32.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.539 --rc genhtml_branch_coverage=1 00:10:32.539 --rc genhtml_function_coverage=1 00:10:32.539 --rc genhtml_legend=1 00:10:32.539 --rc geninfo_all_blocks=1 00:10:32.539 --rc geninfo_unexecuted_blocks=1 00:10:32.539 00:10:32.539 ' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:32.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.539 --rc genhtml_branch_coverage=1 00:10:32.539 --rc genhtml_function_coverage=1 00:10:32.539 --rc genhtml_legend=1 00:10:32.539 --rc geninfo_all_blocks=1 00:10:32.539 --rc geninfo_unexecuted_blocks=1 00:10:32.539 00:10:32.539 ' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:32.539 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:32.539 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:32.539 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:32.540 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:32.540 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:32.540 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:32.540 
13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:32.540 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:32.540 #define SPDK_CONFIG_H 00:10:32.540 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:32.540 #define SPDK_CONFIG_APPS 1 00:10:32.540 #define SPDK_CONFIG_ARCH native 00:10:32.540 #undef SPDK_CONFIG_ASAN 00:10:32.540 #undef SPDK_CONFIG_AVAHI 00:10:32.540 #undef SPDK_CONFIG_CET 00:10:32.540 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:32.540 #define SPDK_CONFIG_COVERAGE 1 00:10:32.540 #define SPDK_CONFIG_CROSS_PREFIX 00:10:32.540 #undef SPDK_CONFIG_CRYPTO 00:10:32.540 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:32.540 #undef SPDK_CONFIG_CUSTOMOCF 00:10:32.540 #undef SPDK_CONFIG_DAOS 00:10:32.540 #define SPDK_CONFIG_DAOS_DIR 00:10:32.540 #define SPDK_CONFIG_DEBUG 1 00:10:32.540 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:32.540 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:32.540 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:32.540 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:32.540 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:32.540 #undef SPDK_CONFIG_DPDK_UADK 00:10:32.540 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:32.540 #define SPDK_CONFIG_EXAMPLES 1 00:10:32.540 #undef SPDK_CONFIG_FC 00:10:32.540 #define SPDK_CONFIG_FC_PATH 00:10:32.540 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:32.540 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:32.540 #define SPDK_CONFIG_FSDEV 1 00:10:32.540 #undef SPDK_CONFIG_FUSE 00:10:32.540 #undef SPDK_CONFIG_FUZZER 00:10:32.540 #define SPDK_CONFIG_FUZZER_LIB 00:10:32.540 #undef SPDK_CONFIG_GOLANG 00:10:32.540 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:32.540 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:32.540 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:32.540 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:32.540 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:32.540 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:32.540 #undef SPDK_CONFIG_HAVE_LZ4 00:10:32.540 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:32.540 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:32.540 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:32.540 #define SPDK_CONFIG_IDXD 1 00:10:32.541 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:32.541 #undef SPDK_CONFIG_IPSEC_MB 00:10:32.541 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:32.541 #define SPDK_CONFIG_ISAL 1 00:10:32.541 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:32.541 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:32.541 #define SPDK_CONFIG_LIBDIR 00:10:32.541 #undef SPDK_CONFIG_LTO 00:10:32.541 #define SPDK_CONFIG_MAX_LCORES 128 00:10:32.541 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:32.541 #define SPDK_CONFIG_NVME_CUSE 1 00:10:32.541 #undef SPDK_CONFIG_OCF 00:10:32.541 #define SPDK_CONFIG_OCF_PATH 00:10:32.541 #define SPDK_CONFIG_OPENSSL_PATH 00:10:32.541 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:32.541 #define SPDK_CONFIG_PGO_DIR 00:10:32.541 #undef SPDK_CONFIG_PGO_USE 00:10:32.541 #define SPDK_CONFIG_PREFIX /usr/local 00:10:32.541 #undef SPDK_CONFIG_RAID5F 00:10:32.541 #undef SPDK_CONFIG_RBD 00:10:32.541 #define SPDK_CONFIG_RDMA 1 00:10:32.541 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:32.541 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:32.541 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:32.541 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:32.541 #define SPDK_CONFIG_SHARED 1 00:10:32.541 #undef SPDK_CONFIG_SMA 00:10:32.541 #define SPDK_CONFIG_TESTS 1 00:10:32.541 #undef SPDK_CONFIG_TSAN 00:10:32.541 #define SPDK_CONFIG_UBLK 1 00:10:32.541 #define SPDK_CONFIG_UBSAN 1 00:10:32.541 #undef SPDK_CONFIG_UNIT_TESTS 00:10:32.541 #undef SPDK_CONFIG_URING 00:10:32.541 #define SPDK_CONFIG_URING_PATH 00:10:32.541 #undef SPDK_CONFIG_URING_ZNS 00:10:32.541 #undef SPDK_CONFIG_USDT 00:10:32.541 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:32.541 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:32.541 #define SPDK_CONFIG_VFIO_USER 1 00:10:32.541 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:32.541 #define SPDK_CONFIG_VHOST 1 00:10:32.541 #define SPDK_CONFIG_VIRTIO 1 00:10:32.541 #undef SPDK_CONFIG_VTUNE 00:10:32.541 #define SPDK_CONFIG_VTUNE_DIR 00:10:32.541 #define SPDK_CONFIG_WERROR 1 00:10:32.541 #define SPDK_CONFIG_WPDK_DIR 00:10:32.541 #undef SPDK_CONFIG_XNVME 00:10:32.541 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:32.541 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:32.541 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:32.542 
13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:32.542 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:32.542 
13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:32.542 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.542 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 515108 ]] 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 515108 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:32.543 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.dIm4ck 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dIm4ck/tests/target /tmp/spdk.dIm4ck 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=118210686976 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11145854976 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666902528 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847943168 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:32.544 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677265408 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1007616 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:32.544 * Looking for test storage... 
00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=118210686976 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13360447488 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.544 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:32.544 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:32.544 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.807 --rc genhtml_branch_coverage=1 00:10:32.807 --rc genhtml_function_coverage=1 00:10:32.807 --rc genhtml_legend=1 00:10:32.807 --rc geninfo_all_blocks=1 00:10:32.807 --rc geninfo_unexecuted_blocks=1 00:10:32.807 00:10:32.807 ' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.807 --rc genhtml_branch_coverage=1 00:10:32.807 --rc genhtml_function_coverage=1 00:10:32.807 --rc genhtml_legend=1 00:10:32.807 --rc geninfo_all_blocks=1 00:10:32.807 --rc geninfo_unexecuted_blocks=1 00:10:32.807 00:10:32.807 ' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.807 --rc genhtml_branch_coverage=1 00:10:32.807 --rc genhtml_function_coverage=1 00:10:32.807 --rc genhtml_legend=1 00:10:32.807 --rc geninfo_all_blocks=1 00:10:32.807 --rc geninfo_unexecuted_blocks=1 00:10:32.807 00:10:32.807 ' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.807 --rc genhtml_branch_coverage=1 00:10:32.807 --rc genhtml_function_coverage=1 00:10:32.807 --rc genhtml_legend=1 00:10:32.807 --rc geninfo_all_blocks=1 00:10:32.807 --rc geninfo_unexecuted_blocks=1 00:10:32.807 00:10:32.807 ' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.807 13:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.807 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.808 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.949 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.949 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.950 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.950 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.950 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.950 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.950 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:10:40.950 00:10:40.950 --- 10.0.0.2 ping statistics --- 00:10:40.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.950 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:10:40.950 00:10:40.950 --- 10.0.0.1 ping statistics --- 00:10:40.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.950 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.950 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:40.951 13:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 ************************************ 00:10:40.951 START TEST nvmf_filesystem_no_in_capsule 00:10:40.951 ************************************ 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=518745 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 518745 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 518745 ']' 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.951 13:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.951 [2024-11-06 13:35:03.273191] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:10:40.951 [2024-11-06 13:35:03.273254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.951 [2024-11-06 13:35:03.355300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.951 [2024-11-06 13:35:03.397376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.951 [2024-11-06 13:35:03.397410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:40.951 [2024-11-06 13:35:03.397418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.951 [2024-11-06 13:35:03.397425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.951 [2024-11-06 13:35:03.397431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.951 [2024-11-06 13:35:03.399032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.951 [2024-11-06 13:35:03.399148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.951 [2024-11-06 13:35:03.399305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.951 [2024-11-06 13:35:03.399305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 [2024-11-06 13:35:04.129896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 [2024-11-06 13:35:04.260400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:40.951 13:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.951 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:40.951 { 00:10:40.951 "name": "Malloc1", 00:10:40.951 "aliases": [ 00:10:40.951 "3704e6b2-c00c-4992-81f3-ef2b32f2b39b" 00:10:40.951 ], 00:10:40.951 "product_name": "Malloc disk", 00:10:40.951 "block_size": 512, 00:10:40.951 "num_blocks": 1048576, 00:10:40.951 "uuid": "3704e6b2-c00c-4992-81f3-ef2b32f2b39b", 00:10:40.951 "assigned_rate_limits": { 00:10:40.951 "rw_ios_per_sec": 0, 00:10:40.951 "rw_mbytes_per_sec": 0, 00:10:40.951 "r_mbytes_per_sec": 0, 00:10:40.951 "w_mbytes_per_sec": 0 00:10:40.951 }, 00:10:40.951 "claimed": true, 00:10:40.951 "claim_type": "exclusive_write", 00:10:40.951 "zoned": false, 00:10:40.951 "supported_io_types": { 00:10:40.951 "read": true, 00:10:40.951 "write": true, 00:10:40.951 "unmap": true, 00:10:40.951 "flush": true, 00:10:40.951 "reset": true, 00:10:40.951 "nvme_admin": false, 00:10:40.951 "nvme_io": false, 00:10:40.951 "nvme_io_md": false, 00:10:40.951 "write_zeroes": true, 00:10:40.951 "zcopy": true, 00:10:40.951 "get_zone_info": false, 00:10:40.951 "zone_management": false, 00:10:40.951 "zone_append": false, 00:10:40.951 "compare": false, 00:10:40.951 "compare_and_write": 
false, 00:10:40.951 "abort": true, 00:10:40.951 "seek_hole": false, 00:10:40.951 "seek_data": false, 00:10:40.951 "copy": true, 00:10:40.951 "nvme_iov_md": false 00:10:40.951 }, 00:10:40.952 "memory_domains": [ 00:10:40.952 { 00:10:40.952 "dma_device_id": "system", 00:10:40.952 "dma_device_type": 1 00:10:40.952 }, 00:10:40.952 { 00:10:40.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.952 "dma_device_type": 2 00:10:40.952 } 00:10:40.952 ], 00:10:40.952 "driver_specific": {} 00:10:40.952 } 00:10:40.952 ]' 00:10:40.952 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:41.212 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.595 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:42.595 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:42.595 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.595 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:42.595 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:44.508 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:44.509 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:44.509 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:44.769 13:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:44.769 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:45.034 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:45.978 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:46.920 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.920 ************************************ 00:10:46.920 START TEST filesystem_ext4 00:10:46.920 ************************************ 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:46.920 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:46.920 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:46.920 mke2fs 1.47.0 (5-Feb-2023) 00:10:46.920 Discarding device blocks: 0/522240 done 00:10:46.920 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:46.920 Filesystem UUID: 2ce6012c-357f-4ae8-8d3a-dcbecca54eba 00:10:46.920 Superblock backups stored on blocks: 00:10:46.920 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:46.920 00:10:46.920 Allocating group tables: 0/64 done 00:10:46.920 Writing inode tables: 0/64 done 00:10:47.180 Creating journal (8192 blocks): done 00:10:48.565 Writing superblocks and filesystem accounting information: 0/64 done 00:10:48.565 00:10:48.565 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:48.565 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.231 13:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 518745 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.231 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.232 00:10:55.232 real 0m7.453s 00:10:55.232 user 0m0.035s 00:10:55.232 sys 0m0.073s 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:55.232 ************************************ 00:10:55.232 END TEST filesystem_ext4 00:10:55.232 ************************************ 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:55.232 
13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.232 ************************************ 00:10:55.232 START TEST filesystem_btrfs 00:10:55.232 ************************************ 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:55.232 13:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:55.232 btrfs-progs v6.8.1 00:10:55.232 See https://btrfs.readthedocs.io for more information. 00:10:55.232 00:10:55.232 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:55.232 NOTE: several default settings have changed in version 5.15, please make sure 00:10:55.232 this does not affect your deployments: 00:10:55.232 - DUP for metadata (-m dup) 00:10:55.232 - enabled no-holes (-O no-holes) 00:10:55.232 - enabled free-space-tree (-R free-space-tree) 00:10:55.232 00:10:55.232 Label: (null) 00:10:55.232 UUID: 4149cfc6-1839-4ceb-9a80-45bdf3a7b223 00:10:55.232 Node size: 16384 00:10:55.232 Sector size: 4096 (CPU page size: 4096) 00:10:55.232 Filesystem size: 510.00MiB 00:10:55.232 Block group profiles: 00:10:55.232 Data: single 8.00MiB 00:10:55.232 Metadata: DUP 32.00MiB 00:10:55.232 System: DUP 8.00MiB 00:10:55.232 SSD detected: yes 00:10:55.232 Zoned device: no 00:10:55.232 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:55.232 Checksum: crc32c 00:10:55.232 Number of devices: 1 00:10:55.232 Devices: 00:10:55.232 ID SIZE PATH 00:10:55.232 1 510.00MiB /dev/nvme0n1p1 00:10:55.232 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:55.232 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.493 13:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 518745 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.493 00:10:55.493 real 0m1.267s 00:10:55.493 user 0m0.027s 00:10:55.493 sys 0m0.124s 00:10:55.493 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.493 
13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.493 ************************************ 00:10:55.493 END TEST filesystem_btrfs 00:10:55.493 ************************************ 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.755 ************************************ 00:10:55.755 START TEST filesystem_xfs 00:10:55.755 ************************************ 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:55.755 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:55.755 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:55.755 = sectsz=512 attr=2, projid32bit=1 00:10:55.755 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:55.755 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:55.755 data = bsize=4096 blocks=130560, imaxpct=25 00:10:55.755 = sunit=0 swidth=0 blks 00:10:55.755 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:55.755 log =internal log bsize=4096 blocks=16384, version=2 00:10:55.755 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:55.755 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:56.696 Discarding blocks...Done. 
00:10:56.696 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:56.696 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 518745 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.609 13:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.609 00:10:58.609 real 0m2.759s 00:10:58.609 user 0m0.024s 00:10:58.609 sys 0m0.081s 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.609 ************************************ 00:10:58.609 END TEST filesystem_xfs 00:10:58.609 ************************************ 00:10:58.609 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 518745 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 518745 ']' 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 518745 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.870 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 518745 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 518745' 00:10:59.130 killing process with pid 518745 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 518745 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 518745 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:59.130 00:10:59.130 real 0m19.296s 00:10:59.130 user 1m16.272s 00:10:59.130 sys 0m1.472s 00:10:59.130 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.391 ************************************ 00:10:59.391 END TEST nvmf_filesystem_no_in_capsule 00:10:59.391 ************************************ 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.391 13:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.391 ************************************ 00:10:59.391 START TEST nvmf_filesystem_in_capsule 00:10:59.391 ************************************ 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=522674 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 522674 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 522674 ']' 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.391 13:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.391 13:35:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.391 [2024-11-06 13:35:22.653559] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:10:59.391 [2024-11-06 13:35:22.653611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.391 [2024-11-06 13:35:22.730817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.652 [2024-11-06 13:35:22.766108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.652 [2024-11-06 13:35:22.766140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.652 [2024-11-06 13:35:22.766149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.652 [2024-11-06 13:35:22.766156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.652 [2024-11-06 13:35:22.766162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:59.652 [2024-11-06 13:35:22.767804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.652 [2024-11-06 13:35:22.768000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.652 [2024-11-06 13:35:22.768002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.652 [2024-11-06 13:35:22.767858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 [2024-11-06 13:35:23.493394] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 Malloc1 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.486 13:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.486 [2024-11-06 13:35:23.617183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.486 13:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:00.486 { 00:11:00.486 "name": "Malloc1", 00:11:00.486 "aliases": [ 00:11:00.486 "1ff45745-f893-4ac0-8fec-06d4a8693587" 00:11:00.486 ], 00:11:00.486 "product_name": "Malloc disk", 00:11:00.486 "block_size": 512, 00:11:00.486 "num_blocks": 1048576, 00:11:00.486 "uuid": "1ff45745-f893-4ac0-8fec-06d4a8693587", 00:11:00.486 "assigned_rate_limits": { 00:11:00.486 "rw_ios_per_sec": 0, 00:11:00.486 "rw_mbytes_per_sec": 0, 00:11:00.486 "r_mbytes_per_sec": 0, 00:11:00.486 "w_mbytes_per_sec": 0 00:11:00.486 }, 00:11:00.486 "claimed": true, 00:11:00.486 "claim_type": "exclusive_write", 00:11:00.486 "zoned": false, 00:11:00.486 "supported_io_types": { 00:11:00.486 "read": true, 00:11:00.486 "write": true, 00:11:00.486 "unmap": true, 00:11:00.486 "flush": true, 00:11:00.486 "reset": true, 00:11:00.486 "nvme_admin": false, 00:11:00.486 "nvme_io": false, 00:11:00.486 "nvme_io_md": false, 00:11:00.486 "write_zeroes": true, 00:11:00.486 "zcopy": true, 00:11:00.486 "get_zone_info": false, 00:11:00.486 "zone_management": false, 00:11:00.486 "zone_append": false, 00:11:00.486 "compare": false, 00:11:00.486 "compare_and_write": false, 00:11:00.486 "abort": true, 00:11:00.486 "seek_hole": false, 00:11:00.486 "seek_data": false, 00:11:00.486 "copy": true, 00:11:00.486 "nvme_iov_md": false 00:11:00.486 }, 00:11:00.486 "memory_domains": [ 00:11:00.486 { 00:11:00.486 "dma_device_id": "system", 00:11:00.486 "dma_device_type": 1 00:11:00.486 }, 00:11:00.486 { 00:11:00.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.486 "dma_device_type": 2 00:11:00.486 } 00:11:00.486 ], 00:11:00.486 
"driver_specific": {} 00:11:00.486 } 00:11:00.486 ]' 00:11:00.486 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:00.487 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.401 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.401 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:02.401 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.401 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:11:02.401 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:04.316 13:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:04.316 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:04.577 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:05.147 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.090 ************************************ 00:11:06.090 START TEST filesystem_in_capsule_ext4 00:11:06.090 ************************************ 00:11:06.090 13:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:06.090 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:06.091 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:06.091 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:06.091 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:06.091 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:06.091 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:06.091 mke2fs 1.47.0 (5-Feb-2023) 00:11:06.091 Discarding device blocks: 
0/522240 done 00:11:06.091 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:06.091 Filesystem UUID: 95f874ae-d690-4282-925c-e4eed1d8b465 00:11:06.091 Superblock backups stored on blocks: 00:11:06.091 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:06.091 00:11:06.091 Allocating group tables: 0/64 done 00:11:06.091 Writing inode tables: 0/64 done 00:11:09.397 Creating journal (8192 blocks): done 00:11:09.397 Writing superblocks and filesystem accounting information: 0/64 done 00:11:09.397 00:11:09.397 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:09.397 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 522674 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.986 00:11:15.986 real 0m8.878s 00:11:15.986 user 0m0.038s 00:11:15.986 sys 0m0.071s 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:15.986 ************************************ 00:11:15.986 END TEST filesystem_in_capsule_ext4 00:11:15.986 ************************************ 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.986 ************************************ 00:11:15.986 START 
TEST filesystem_in_capsule_btrfs 00:11:15.986 ************************************ 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:15.986 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:15.986 btrfs-progs v6.8.1 00:11:15.986 See https://btrfs.readthedocs.io for more information. 00:11:15.986 00:11:15.986 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:15.986 NOTE: several default settings have changed in version 5.15, please make sure 00:11:15.986 this does not affect your deployments: 00:11:15.986 - DUP for metadata (-m dup) 00:11:15.986 - enabled no-holes (-O no-holes) 00:11:15.986 - enabled free-space-tree (-R free-space-tree) 00:11:15.986 00:11:15.986 Label: (null) 00:11:15.986 UUID: 32f53b73-33b4-471d-9353-0f329c20f456 00:11:15.986 Node size: 16384 00:11:15.986 Sector size: 4096 (CPU page size: 4096) 00:11:15.986 Filesystem size: 510.00MiB 00:11:15.986 Block group profiles: 00:11:15.986 Data: single 8.00MiB 00:11:15.986 Metadata: DUP 32.00MiB 00:11:15.986 System: DUP 8.00MiB 00:11:15.986 SSD detected: yes 00:11:15.986 Zoned device: no 00:11:15.986 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:15.986 Checksum: crc32c 00:11:15.986 Number of devices: 1 00:11:15.986 Devices: 00:11:15.986 ID SIZE PATH 00:11:15.986 1 510.00MiB /dev/nvme0n1p1 00:11:15.986 00:11:15.987 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:15.987 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:15.987 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 522674 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.249 00:11:16.249 real 0m1.096s 00:11:16.249 user 0m0.032s 00:11:16.249 sys 0m0.117s 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.249 ************************************ 00:11:16.249 END TEST filesystem_in_capsule_btrfs 00:11:16.249 ************************************ 00:11:16.249 13:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.249 ************************************ 00:11:16.249 START TEST filesystem_in_capsule_xfs 00:11:16.249 ************************************ 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:16.249 
13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:16.249 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:17.191 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:17.191 = sectsz=512 attr=2, projid32bit=1 00:11:17.191 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:17.191 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:17.191 data = bsize=4096 blocks=130560, imaxpct=25 00:11:17.191 = sunit=0 swidth=0 blks 00:11:17.191 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:17.191 log =internal log bsize=4096 blocks=16384, version=2 00:11:17.191 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:17.191 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:18.133 Discarding blocks...Done. 
00:11:18.133 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:18.133 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 522674 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.678 00:11:20.678 real 0m4.239s 00:11:20.678 user 0m0.033s 00:11:20.678 sys 0m0.075s 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.678 ************************************ 00:11:20.678 END TEST filesystem_in_capsule_xfs 00:11:20.678 ************************************ 00:11:20.678 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.939 13:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 522674 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 522674 ']' 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 522674 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:20.939 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:20.939 13:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522674 00:11:21.199 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:21.199 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:21.199 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522674' 00:11:21.199 killing process with pid 522674 00:11:21.199 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 522674 00:11:21.199 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 522674 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:21.460 00:11:21.460 real 0m21.998s 00:11:21.460 user 1m27.116s 00:11:21.460 sys 0m1.435s 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.460 ************************************ 00:11:21.460 END TEST nvmf_filesystem_in_capsule 00:11:21.460 ************************************ 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.460 rmmod nvme_tcp 00:11:21.460 rmmod nvme_fabrics 00:11:21.460 rmmod nvme_keyring 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.460 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.006 00:11:24.006 real 0m51.296s 00:11:24.006 user 2m45.590s 00:11:24.006 sys 0m8.668s 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.006 ************************************ 00:11:24.006 END TEST nvmf_filesystem 00:11:24.006 ************************************ 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.006 ************************************ 00:11:24.006 START TEST nvmf_target_discovery 00:11:24.006 ************************************ 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:24.006 * Looking for test storage... 
00:11:24.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.006 13:35:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:24.006 
13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.006 --rc genhtml_branch_coverage=1 00:11:24.006 --rc genhtml_function_coverage=1 00:11:24.006 --rc genhtml_legend=1 00:11:24.006 --rc geninfo_all_blocks=1 00:11:24.006 --rc geninfo_unexecuted_blocks=1 00:11:24.006 00:11:24.006 ' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.006 --rc genhtml_branch_coverage=1 00:11:24.006 --rc genhtml_function_coverage=1 00:11:24.006 --rc genhtml_legend=1 00:11:24.006 --rc geninfo_all_blocks=1 00:11:24.006 --rc geninfo_unexecuted_blocks=1 00:11:24.006 00:11:24.006 ' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.006 --rc genhtml_branch_coverage=1 00:11:24.006 --rc genhtml_function_coverage=1 00:11:24.006 --rc genhtml_legend=1 00:11:24.006 --rc geninfo_all_blocks=1 00:11:24.006 --rc geninfo_unexecuted_blocks=1 00:11:24.006 00:11:24.006 ' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.006 --rc genhtml_branch_coverage=1 00:11:24.006 --rc genhtml_function_coverage=1 00:11:24.006 --rc genhtml_legend=1 00:11:24.006 --rc geninfo_all_blocks=1 00:11:24.006 --rc geninfo_unexecuted_blocks=1 00:11:24.006 00:11:24.006 ' 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.006 13:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.006 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.007 13:35:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.152 13:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.152 13:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:32.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:32.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.152 13:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:32.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.152 13:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:32.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.152 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:11:32.153 00:11:32.153 --- 10.0.0.2 ping statistics --- 00:11:32.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.153 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:11:32.153 00:11:32.153 --- 10.0.0.1 ping statistics --- 00:11:32.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.153 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=531414 00:11:32.153 13:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 531414 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 531414 ']' 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:32.153 13:35:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 [2024-11-06 13:35:54.432342] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:11:32.153 [2024-11-06 13:35:54.432401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.153 [2024-11-06 13:35:54.513347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.153 [2024-11-06 13:35:54.554038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:32.153 [2024-11-06 13:35:54.554072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.153 [2024-11-06 13:35:54.554080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.153 [2024-11-06 13:35:54.554087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.153 [2024-11-06 13:35:54.554093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.153 [2024-11-06 13:35:54.555678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.153 [2024-11-06 13:35:54.555793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.153 [2024-11-06 13:35:54.556032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.153 [2024-11-06 13:35:54.556032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 [2024-11-06 13:35:55.278490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 Null1 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 
13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 [2024-11-06 13:35:55.322775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.153 Null2 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.153 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 
13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 Null3 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 Null4 00:11:32.154 
13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.154 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:32.416 00:11:32.416 Discovery Log Number of Records 6, Generation counter 6 00:11:32.416 =====Discovery Log Entry 0====== 00:11:32.416 trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: current discovery subsystem 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4420 00:11:32.416 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: explicit discovery connections, duplicate discovery information 00:11:32.416 sectype: none 00:11:32.416 =====Discovery Log Entry 1====== 00:11:32.416 trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: nvme subsystem 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4420 00:11:32.416 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: none 00:11:32.416 sectype: none 00:11:32.416 =====Discovery Log Entry 2====== 00:11:32.416 
trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: nvme subsystem 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4420 00:11:32.416 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: none 00:11:32.416 sectype: none 00:11:32.416 =====Discovery Log Entry 3====== 00:11:32.416 trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: nvme subsystem 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4420 00:11:32.416 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: none 00:11:32.416 sectype: none 00:11:32.416 =====Discovery Log Entry 4====== 00:11:32.416 trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: nvme subsystem 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4420 00:11:32.416 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: none 00:11:32.416 sectype: none 00:11:32.416 =====Discovery Log Entry 5====== 00:11:32.416 trtype: tcp 00:11:32.416 adrfam: ipv4 00:11:32.416 subtype: discovery subsystem referral 00:11:32.416 treq: not required 00:11:32.416 portid: 0 00:11:32.416 trsvcid: 4430 00:11:32.416 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:32.416 traddr: 10.0.0.2 00:11:32.416 eflags: none 00:11:32.416 sectype: none 00:11:32.416 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:32.416 Perform nvmf subsystem discovery via RPC 00:11:32.416 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:32.416 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.416 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.416 [ 00:11:32.416 { 00:11:32.416 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:32.416 "subtype": "Discovery", 00:11:32.416 "listen_addresses": [ 00:11:32.416 { 00:11:32.416 "trtype": "TCP", 00:11:32.416 "adrfam": "IPv4", 00:11:32.416 "traddr": "10.0.0.2", 00:11:32.416 "trsvcid": "4420" 00:11:32.416 } 00:11:32.416 ], 00:11:32.416 "allow_any_host": true, 00:11:32.416 "hosts": [] 00:11:32.416 }, 00:11:32.416 { 00:11:32.416 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.416 "subtype": "NVMe", 00:11:32.416 "listen_addresses": [ 00:11:32.416 { 00:11:32.416 "trtype": "TCP", 00:11:32.416 "adrfam": "IPv4", 00:11:32.416 "traddr": "10.0.0.2", 00:11:32.416 "trsvcid": "4420" 00:11:32.416 } 00:11:32.416 ], 00:11:32.416 "allow_any_host": true, 00:11:32.416 "hosts": [], 00:11:32.416 "serial_number": "SPDK00000000000001", 00:11:32.416 "model_number": "SPDK bdev Controller", 00:11:32.416 "max_namespaces": 32, 00:11:32.416 "min_cntlid": 1, 00:11:32.416 "max_cntlid": 65519, 00:11:32.416 "namespaces": [ 00:11:32.416 { 00:11:32.416 "nsid": 1, 00:11:32.416 "bdev_name": "Null1", 00:11:32.416 "name": "Null1", 00:11:32.416 "nguid": "ECC7E5CADE4B464CABF4EED25D23E5A5", 00:11:32.416 "uuid": "ecc7e5ca-de4b-464c-abf4-eed25d23e5a5" 00:11:32.416 } 00:11:32.416 ] 00:11:32.416 }, 00:11:32.416 { 00:11:32.416 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:32.416 "subtype": "NVMe", 00:11:32.416 "listen_addresses": [ 00:11:32.416 { 00:11:32.416 "trtype": "TCP", 00:11:32.416 "adrfam": "IPv4", 00:11:32.416 "traddr": "10.0.0.2", 00:11:32.416 "trsvcid": "4420" 00:11:32.416 } 00:11:32.416 ], 00:11:32.416 "allow_any_host": true, 00:11:32.416 "hosts": [], 00:11:32.416 "serial_number": "SPDK00000000000002", 00:11:32.416 "model_number": "SPDK bdev Controller", 00:11:32.416 "max_namespaces": 32, 00:11:32.416 "min_cntlid": 1, 00:11:32.416 "max_cntlid": 65519, 00:11:32.416 "namespaces": [ 00:11:32.416 { 00:11:32.416 "nsid": 1, 00:11:32.416 "bdev_name": "Null2", 00:11:32.416 "name": "Null2", 00:11:32.416 "nguid": "A0048229310F40649FCC6983E133B238", 
00:11:32.416 "uuid": "a0048229-310f-4064-9fcc-6983e133b238" 00:11:32.416 } 00:11:32.416 ] 00:11:32.416 }, 00:11:32.416 { 00:11:32.416 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:32.417 "subtype": "NVMe", 00:11:32.417 "listen_addresses": [ 00:11:32.417 { 00:11:32.417 "trtype": "TCP", 00:11:32.417 "adrfam": "IPv4", 00:11:32.417 "traddr": "10.0.0.2", 00:11:32.417 "trsvcid": "4420" 00:11:32.417 } 00:11:32.417 ], 00:11:32.417 "allow_any_host": true, 00:11:32.417 "hosts": [], 00:11:32.417 "serial_number": "SPDK00000000000003", 00:11:32.417 "model_number": "SPDK bdev Controller", 00:11:32.417 "max_namespaces": 32, 00:11:32.417 "min_cntlid": 1, 00:11:32.417 "max_cntlid": 65519, 00:11:32.417 "namespaces": [ 00:11:32.417 { 00:11:32.417 "nsid": 1, 00:11:32.417 "bdev_name": "Null3", 00:11:32.417 "name": "Null3", 00:11:32.417 "nguid": "952AA9BA3D644C37AB820D094EFA2906", 00:11:32.417 "uuid": "952aa9ba-3d64-4c37-ab82-0d094efa2906" 00:11:32.417 } 00:11:32.417 ] 00:11:32.417 }, 00:11:32.417 { 00:11:32.417 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:32.417 "subtype": "NVMe", 00:11:32.417 "listen_addresses": [ 00:11:32.417 { 00:11:32.417 "trtype": "TCP", 00:11:32.417 "adrfam": "IPv4", 00:11:32.417 "traddr": "10.0.0.2", 00:11:32.417 "trsvcid": "4420" 00:11:32.417 } 00:11:32.417 ], 00:11:32.417 "allow_any_host": true, 00:11:32.417 "hosts": [], 00:11:32.417 "serial_number": "SPDK00000000000004", 00:11:32.417 "model_number": "SPDK bdev Controller", 00:11:32.417 "max_namespaces": 32, 00:11:32.417 "min_cntlid": 1, 00:11:32.417 "max_cntlid": 65519, 00:11:32.417 "namespaces": [ 00:11:32.417 { 00:11:32.417 "nsid": 1, 00:11:32.417 "bdev_name": "Null4", 00:11:32.417 "name": "Null4", 00:11:32.417 "nguid": "150AE9D1FB6148B29B7B172CD24D12F3", 00:11:32.417 "uuid": "150ae9d1-fb61-48b2-9b7b-172cd24d12f3" 00:11:32.417 } 00:11:32.417 ] 00:11:32.417 } 00:11:32.417 ] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 
13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.417 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.678 rmmod nvme_tcp 00:11:32.678 rmmod nvme_fabrics 00:11:32.678 rmmod nvme_keyring 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 531414 ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 531414 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 531414 ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 531414 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:32.678 
13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 531414 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 531414' 00:11:32.678 killing process with pid 531414 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 531414 00:11:32.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 531414 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:32.678 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.940 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.940 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.940 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.940 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.940 13:35:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.856 00:11:34.856 real 0m11.275s 00:11:34.856 user 0m8.260s 00:11:34.856 sys 0m5.856s 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 ************************************ 00:11:34.856 END TEST nvmf_target_discovery 00:11:34.856 ************************************ 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.856 ************************************ 00:11:34.856 START TEST nvmf_referrals 00:11:34.856 ************************************ 00:11:34.856 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:35.118 * Looking for test storage... 
00:11:35.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.118 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.118 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:35.119 13:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.119 
--rc genhtml_branch_coverage=1 00:11:35.119 --rc genhtml_function_coverage=1 00:11:35.119 --rc genhtml_legend=1 00:11:35.119 --rc geninfo_all_blocks=1 00:11:35.119 --rc geninfo_unexecuted_blocks=1 00:11:35.119 00:11:35.119 ' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.119 --rc genhtml_branch_coverage=1 00:11:35.119 --rc genhtml_function_coverage=1 00:11:35.119 --rc genhtml_legend=1 00:11:35.119 --rc geninfo_all_blocks=1 00:11:35.119 --rc geninfo_unexecuted_blocks=1 00:11:35.119 00:11:35.119 ' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.119 --rc genhtml_branch_coverage=1 00:11:35.119 --rc genhtml_function_coverage=1 00:11:35.119 --rc genhtml_legend=1 00:11:35.119 --rc geninfo_all_blocks=1 00:11:35.119 --rc geninfo_unexecuted_blocks=1 00:11:35.119 00:11:35.119 ' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.119 --rc genhtml_branch_coverage=1 00:11:35.119 --rc genhtml_function_coverage=1 00:11:35.119 --rc genhtml_legend=1 00:11:35.119 --rc geninfo_all_blocks=1 00:11:35.119 --rc geninfo_unexecuted_blocks=1 00:11:35.119 00:11:35.119 ' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.119 
13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.119 13:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:35.119 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.120 13:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.120 13:35:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.326 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.327 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.327 13:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:11:43.327 00:11:43.327 --- 10.0.0.2 ping statistics --- 00:11:43.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.327 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:11:43.327 00:11:43.327 --- 10.0.0.1 ping statistics --- 00:11:43.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.327 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=536049 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 536049 00:11:43.327 
13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 536049 ']' 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.327 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.328 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.328 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.328 [2024-11-06 13:36:05.915452] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:11:43.328 [2024-11-06 13:36:05.915524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.328 [2024-11-06 13:36:05.998370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.328 [2024-11-06 13:36:06.040766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.328 [2024-11-06 13:36:06.040800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:43.328 [2024-11-06 13:36:06.040808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.328 [2024-11-06 13:36:06.040815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.328 [2024-11-06 13:36:06.040821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.328 [2024-11-06 13:36:06.042392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.328 [2024-11-06 13:36:06.042514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.328 [2024-11-06 13:36:06.042673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.328 [2024-11-06 13:36:06.042674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 [2024-11-06 13:36:06.769562] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 [2024-11-06 13:36:06.785802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.589 13:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.589 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.851 13:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.851 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.113 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.114 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.114 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.375 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.637 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:44.897 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.897 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.897 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.898 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.159 13:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.159 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.419 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.680 rmmod nvme_tcp 00:11:45.680 rmmod nvme_fabrics 00:11:45.680 rmmod nvme_keyring 00:11:45.680 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 536049 ']' 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 536049 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 536049 ']' 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 536049 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:45.680 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 536049 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 536049' 00:11:45.941 killing process with pid 536049 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- 
# kill 536049 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 536049 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.941 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.492 00:11:48.492 real 0m13.057s 00:11:48.492 user 0m15.637s 00:11:48.492 sys 0m6.452s 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.492 ************************************ 
00:11:48.492 END TEST nvmf_referrals 00:11:48.492 ************************************ 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.492 ************************************ 00:11:48.492 START TEST nvmf_connect_disconnect 00:11:48.492 ************************************ 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.492 * Looking for test storage... 
00:11:48.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.492 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:48.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.493 --rc genhtml_branch_coverage=1 00:11:48.493 --rc genhtml_function_coverage=1 00:11:48.493 --rc genhtml_legend=1 00:11:48.493 --rc geninfo_all_blocks=1 00:11:48.493 --rc geninfo_unexecuted_blocks=1 00:11:48.493 00:11:48.493 ' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:48.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.493 --rc genhtml_branch_coverage=1 00:11:48.493 --rc genhtml_function_coverage=1 00:11:48.493 --rc genhtml_legend=1 00:11:48.493 --rc geninfo_all_blocks=1 00:11:48.493 --rc geninfo_unexecuted_blocks=1 00:11:48.493 00:11:48.493 ' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:48.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.493 --rc genhtml_branch_coverage=1 00:11:48.493 --rc genhtml_function_coverage=1 00:11:48.493 --rc genhtml_legend=1 00:11:48.493 --rc geninfo_all_blocks=1 00:11:48.493 --rc geninfo_unexecuted_blocks=1 00:11:48.493 00:11:48.493 ' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:48.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.493 --rc genhtml_branch_coverage=1 00:11:48.493 --rc genhtml_function_coverage=1 00:11:48.493 --rc genhtml_legend=1 00:11:48.493 --rc geninfo_all_blocks=1 00:11:48.493 --rc geninfo_unexecuted_blocks=1 00:11:48.493 00:11:48.493 ' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.493 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.753 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.753 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.754 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.754 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.754 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.754 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.754 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.754 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:11:56.754 00:11:56.754 --- 10.0.0.2 ping statistics --- 00:11:56.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.754 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:11:56.754 00:11:56.754 --- 10.0.0.1 ping statistics --- 00:11:56.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.754 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:11:56.754 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=541429 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 541429 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 541429 ']' 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:56.754 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.754 [2024-11-06 13:36:19.105972] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:11:56.754 [2024-11-06 13:36:19.106045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.755 [2024-11-06 13:36:19.189548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.755 [2024-11-06 13:36:19.231820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:56.755 [2024-11-06 13:36:19.231854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.755 [2024-11-06 13:36:19.231862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.755 [2024-11-06 13:36:19.231869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.755 [2024-11-06 13:36:19.231875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.755 [2024-11-06 13:36:19.233478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.755 [2024-11-06 13:36:19.233593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.755 [2024-11-06 13:36:19.233758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.755 [2024-11-06 13:36:19.233765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:56.755 13:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 [2024-11-06 13:36:19.960470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 13:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.755 13:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.755 [2024-11-06 13:36:20.031226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:56.755 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:01.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.246 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:15.246 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:15.246 13:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.246 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:15.246 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.246 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.247 rmmod nvme_tcp 00:12:15.247 rmmod nvme_fabrics 00:12:15.247 rmmod nvme_keyring 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 541429 ']' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 541429 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 541429 ']' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 541429 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 541429 00:12:15.247 
13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 541429' 00:12:15.247 killing process with pid 541429 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 541429 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 541429 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.247 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.788 00:12:17.788 real 0m29.325s 00:12:17.788 user 1m19.056s 00:12:17.788 sys 0m7.265s 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 ************************************ 00:12:17.788 END TEST nvmf_connect_disconnect 00:12:17.788 ************************************ 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 ************************************ 00:12:17.788 START TEST nvmf_multitarget 00:12:17.788 ************************************ 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:17.788 * Looking for test storage... 
00:12:17.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.788 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.789 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.789 --rc genhtml_branch_coverage=1 00:12:17.789 --rc genhtml_function_coverage=1 00:12:17.789 --rc genhtml_legend=1 00:12:17.789 --rc geninfo_all_blocks=1 00:12:17.789 --rc geninfo_unexecuted_blocks=1 00:12:17.789 00:12:17.789 ' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.789 --rc genhtml_branch_coverage=1 00:12:17.789 --rc genhtml_function_coverage=1 00:12:17.789 --rc genhtml_legend=1 00:12:17.789 --rc geninfo_all_blocks=1 00:12:17.789 --rc geninfo_unexecuted_blocks=1 00:12:17.789 00:12:17.789 ' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.789 --rc genhtml_branch_coverage=1 00:12:17.789 --rc genhtml_function_coverage=1 00:12:17.789 --rc genhtml_legend=1 00:12:17.789 --rc geninfo_all_blocks=1 00:12:17.789 --rc geninfo_unexecuted_blocks=1 00:12:17.789 00:12:17.789 ' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.789 --rc genhtml_branch_coverage=1 00:12:17.789 --rc genhtml_function_coverage=1 00:12:17.789 --rc genhtml_legend=1 00:12:17.789 --rc geninfo_all_blocks=1 00:12:17.789 --rc geninfo_unexecuted_blocks=1 00:12:17.789 00:12:17.789 ' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.789 13:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.789 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.790 13:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.790 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:25.931 13:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.931 13:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:25.931 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:25.931 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.931 13:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:25.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.931 
13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:25.931 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:25.931 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.932 13:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.932 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:12:25.932 00:12:25.932 --- 10.0.0.2 ping statistics --- 00:12:25.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.932 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:12:25.932 00:12:25.932 --- 10.0.0.1 ping statistics --- 00:12:25.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.932 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=549453 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 549453 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 549453 ']' 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:25.932 13:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.932 [2024-11-06 13:36:48.405997] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:12:25.932 [2024-11-06 13:36:48.406065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.932 [2024-11-06 13:36:48.494056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.932 [2024-11-06 13:36:48.535450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.932 [2024-11-06 13:36:48.535490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
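(Editor's note on the one script error surfaced in this trace: near the top, `common.sh: line 33: [: : integer expression expected` is logged because the trace shows `'[' '' -eq 1 ']'` — `test`'s `-eq` needs integer operands and the variable expanded empty. A minimal reproduction plus a guarded form; `FLAG` is a placeholder name, since the trace does not show which variable was empty:)

```shell
# Reproduction of the "[: : integer expression expected" pattern from
# common.sh line 33, plus a guarded alternative. FLAG is a hypothetical
# stand-in for whatever variable expanded empty in the real script.
FLAG=""

# Unguarded (what the trace shows): '-eq' requires integers, so this
# prints an error to stderr and the test simply evaluates false.
[ "$FLAG" -eq 1 ] 2>/dev/null && echo "unguarded: enabled" || echo "unguarded: disabled"

# Guarded: default an empty/unset value to 0 before the numeric compare,
# so the branch is taken cleanly and nothing hits stderr.
[ "${FLAG:-0}" -eq 1 ] && echo "guarded: enabled" || echo "guarded: disabled"
```

The behavior of the run is unchanged either way (the test was false before and after), which is why the suite proceeds past the message.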
00:12:25.932 [2024-11-06 13:36:48.535499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.932 [2024-11-06 13:36:48.535506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.932 [2024-11-06 13:36:48.535512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.932 [2024-11-06 13:36:48.537035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.932 [2024-11-06 13:36:48.537064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.932 [2024-11-06 13:36:48.537221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.932 [2024-11-06 13:36:48.537221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.932 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.932 13:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:26.193 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:26.193 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:26.193 "nvmf_tgt_1" 00:12:26.193 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:26.193 "nvmf_tgt_2" 00:12:26.452 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.452 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:26.452 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:26.452 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.452 true 00:12:26.452 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.712 true 00:12:26.712 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.712 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.712 rmmod nvme_tcp 00:12:26.712 rmmod nvme_fabrics 00:12:26.712 rmmod nvme_keyring 00:12:26.712 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 549453 ']' 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 549453 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 549453 ']' 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 549453 00:12:26.713 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 549453 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 549453' 00:12:26.973 killing process with pid 549453 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 549453 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 549453 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.973 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.521 00:12:29.521 real 0m11.588s 00:12:29.521 user 0m9.926s 00:12:29.521 sys 0m5.959s 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.521 ************************************ 00:12:29.521 END TEST nvmf_multitarget 00:12:29.521 ************************************ 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.521 ************************************ 00:12:29.521 START TEST nvmf_rpc 00:12:29.521 ************************************ 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.521 * Looking for test storage... 
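(Editor's note: the multitarget test that just passed validated each RPC by piping `multitarget_rpc.py nvmf_get_targets` into `jq length` and comparing against the expected count — 1 at start, 3 after creating `nvmf_tgt_1`/`nvmf_tgt_2`, back to 1 after deleting them. A stubbed sketch of that check which runs without a live target; the quoted-name count stands in for `jq length` on a flat string array, and the stub target names are illustrative, not taken from the trace:)

```shell
# Sketch of the count check multitarget.sh applies after each RPC. The RPC
# output is stubbed with a literal JSON array so the check runs anywhere;
# counting quoted names approximates 'jq length' for a flat string array.
count_targets() {
  printf '%s' "$1" | grep -o '"[^"]*"' | wc -l | tr -d ' '
}

check() {  # usage: check <json-array> <expected-count>
  n=$(count_targets "$1")
  [ "$n" -eq "$2" ] && echo "OK: $n targets" || echo "FAIL: $n != $2"
}

check '["tgt_default"]' 1                                # before creation
check '["tgt_default","nvmf_tgt_1","nvmf_tgt_2"]' 3     # after both creates
check '["tgt_default"]' 1                                # after both deletes
```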
00:12:29.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.521 13:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.521 --rc genhtml_branch_coverage=1 00:12:29.521 --rc genhtml_function_coverage=1 00:12:29.521 --rc genhtml_legend=1 00:12:29.521 --rc geninfo_all_blocks=1 00:12:29.521 --rc geninfo_unexecuted_blocks=1 
00:12:29.521 00:12:29.521 ' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.521 --rc genhtml_branch_coverage=1 00:12:29.521 --rc genhtml_function_coverage=1 00:12:29.521 --rc genhtml_legend=1 00:12:29.521 --rc geninfo_all_blocks=1 00:12:29.521 --rc geninfo_unexecuted_blocks=1 00:12:29.521 00:12:29.521 ' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.521 --rc genhtml_branch_coverage=1 00:12:29.521 --rc genhtml_function_coverage=1 00:12:29.521 --rc genhtml_legend=1 00:12:29.521 --rc geninfo_all_blocks=1 00:12:29.521 --rc geninfo_unexecuted_blocks=1 00:12:29.521 00:12:29.521 ' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.521 --rc genhtml_branch_coverage=1 00:12:29.521 --rc genhtml_function_coverage=1 00:12:29.521 --rc genhtml_legend=1 00:12:29.521 --rc geninfo_all_blocks=1 00:12:29.521 --rc geninfo_unexecuted_blocks=1 00:12:29.521 00:12:29.521 ' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.521 13:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.521 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.522 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.522 13:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.116 
13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.116 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:36.117 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:36.117 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:36.117 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:36.117 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.117 13:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.117 
13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.117 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:12:36.379 00:12:36.379 --- 10.0.0.2 ping statistics --- 00:12:36.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.379 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:12:36.379 00:12:36.379 --- 10.0.0.1 ping statistics --- 00:12:36.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.379 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.379 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=554097 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 554097 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 554097 ']' 
00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.380 13:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.380 [2024-11-06 13:36:59.702955] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:12:36.380 [2024-11-06 13:36:59.703013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.641 [2024-11-06 13:36:59.782951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.641 [2024-11-06 13:36:59.822204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.641 [2024-11-06 13:36:59.822237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.641 [2024-11-06 13:36:59.822246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.641 [2024-11-06 13:36:59.822252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:36.641 [2024-11-06 13:36:59.822260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.641 [2024-11-06 13:36:59.823787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.641 [2024-11-06 13:36:59.823989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.641 [2024-11-06 13:36:59.824233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.641 [2024-11-06 13:36:59.824234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.212 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:37.213 "tick_rate": 2400000000, 00:12:37.213 "poll_groups": [ 00:12:37.213 { 00:12:37.213 "name": "nvmf_tgt_poll_group_000", 00:12:37.213 "admin_qpairs": 0, 00:12:37.213 "io_qpairs": 0, 00:12:37.213 
"current_admin_qpairs": 0, 00:12:37.213 "current_io_qpairs": 0, 00:12:37.213 "pending_bdev_io": 0, 00:12:37.213 "completed_nvme_io": 0, 00:12:37.213 "transports": [] 00:12:37.213 }, 00:12:37.213 { 00:12:37.213 "name": "nvmf_tgt_poll_group_001", 00:12:37.213 "admin_qpairs": 0, 00:12:37.213 "io_qpairs": 0, 00:12:37.213 "current_admin_qpairs": 0, 00:12:37.213 "current_io_qpairs": 0, 00:12:37.213 "pending_bdev_io": 0, 00:12:37.213 "completed_nvme_io": 0, 00:12:37.213 "transports": [] 00:12:37.213 }, 00:12:37.213 { 00:12:37.213 "name": "nvmf_tgt_poll_group_002", 00:12:37.213 "admin_qpairs": 0, 00:12:37.213 "io_qpairs": 0, 00:12:37.213 "current_admin_qpairs": 0, 00:12:37.213 "current_io_qpairs": 0, 00:12:37.213 "pending_bdev_io": 0, 00:12:37.213 "completed_nvme_io": 0, 00:12:37.213 "transports": [] 00:12:37.213 }, 00:12:37.213 { 00:12:37.213 "name": "nvmf_tgt_poll_group_003", 00:12:37.213 "admin_qpairs": 0, 00:12:37.213 "io_qpairs": 0, 00:12:37.213 "current_admin_qpairs": 0, 00:12:37.213 "current_io_qpairs": 0, 00:12:37.213 "pending_bdev_io": 0, 00:12:37.213 "completed_nvme_io": 0, 00:12:37.213 "transports": [] 00:12:37.213 } 00:12:37.213 ] 00:12:37.213 }' 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:37.213 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 [2024-11-06 13:37:00.662655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:37.474 "tick_rate": 2400000000, 00:12:37.474 "poll_groups": [ 00:12:37.474 { 00:12:37.474 "name": "nvmf_tgt_poll_group_000", 00:12:37.474 "admin_qpairs": 0, 00:12:37.474 "io_qpairs": 0, 00:12:37.474 "current_admin_qpairs": 0, 00:12:37.474 "current_io_qpairs": 0, 00:12:37.474 "pending_bdev_io": 0, 00:12:37.474 "completed_nvme_io": 0, 00:12:37.474 "transports": [ 00:12:37.474 { 00:12:37.474 "trtype": "TCP" 00:12:37.474 } 00:12:37.474 ] 00:12:37.474 }, 00:12:37.474 { 00:12:37.474 "name": "nvmf_tgt_poll_group_001", 00:12:37.474 "admin_qpairs": 0, 00:12:37.474 "io_qpairs": 0, 00:12:37.474 "current_admin_qpairs": 0, 00:12:37.474 "current_io_qpairs": 0, 00:12:37.474 "pending_bdev_io": 0, 00:12:37.474 "completed_nvme_io": 0, 00:12:37.474 "transports": [ 00:12:37.474 { 00:12:37.474 "trtype": "TCP" 00:12:37.474 } 00:12:37.474 ] 00:12:37.474 }, 00:12:37.474 { 00:12:37.474 "name": "nvmf_tgt_poll_group_002", 00:12:37.474 "admin_qpairs": 0, 00:12:37.474 "io_qpairs": 0, 00:12:37.474 
"current_admin_qpairs": 0, 00:12:37.474 "current_io_qpairs": 0, 00:12:37.474 "pending_bdev_io": 0, 00:12:37.474 "completed_nvme_io": 0, 00:12:37.474 "transports": [ 00:12:37.474 { 00:12:37.474 "trtype": "TCP" 00:12:37.474 } 00:12:37.474 ] 00:12:37.474 }, 00:12:37.474 { 00:12:37.474 "name": "nvmf_tgt_poll_group_003", 00:12:37.474 "admin_qpairs": 0, 00:12:37.474 "io_qpairs": 0, 00:12:37.474 "current_admin_qpairs": 0, 00:12:37.474 "current_io_qpairs": 0, 00:12:37.474 "pending_bdev_io": 0, 00:12:37.474 "completed_nvme_io": 0, 00:12:37.474 "transports": [ 00:12:37.474 { 00:12:37.474 "trtype": "TCP" 00:12:37.474 } 00:12:37.474 ] 00:12:37.474 } 00:12:37.474 ] 00:12:37.474 }' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 Malloc1 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 [2024-11-06 13:37:00.833086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.475 
13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.475 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.736 [2024-11-06 13:37:00.869964] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:37.736 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.736 could not add new controller: failed to write to nvme-fabrics device 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.736 13:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.736 13:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.120 13:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.120 13:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:39.120 13:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.120 13:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:39.120 13:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.668 13:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.668 [2024-11-06 13:37:04.687479] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:41.668 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.668 could not add new controller: failed to write to nvme-fabrics device 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:41.668 13:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.052 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.052 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:43.052 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.052 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:43.052 13:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:44.963 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 [2024-11-06 13:37:08.454517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.224 13:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.607 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.607 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:46.607 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.607 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:46.607 13:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:49.153 13:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 13:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 [2024-11-06 13:37:12.176815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.154 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.154 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.154 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.154 13:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.536 13:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.536 13:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:50.536 13:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.536 13:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:50.536 13:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:52.448 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 [2024-11-06 13:37:15.931074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.709 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.093 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.093 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:54.093 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:54.093 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:54.093 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.636 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 [2024-11-06 13:37:19.651608] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.637 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.022 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.022 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:58.022 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.022 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:58.022 13:37:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.020 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.021 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.281 [2024-11-06 13:37:23.420614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.281 13:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.281 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.664 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.664 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:01.664 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.664 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:01.664 13:37:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:03.576 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:03.576 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:13:03.576 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.838 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:03.838 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.838 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:03.838 13:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 [2024-11-06 13:37:27.149130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.838 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 [2024-11-06 13:37:27.221301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.099 
13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.099 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 [2024-11-06 13:37:27.289474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.100 
13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 [2024-11-06 13:37:27.357679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 [2024-11-06 
13:37:27.421921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.100 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.361 
13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:04.361 "tick_rate": 2400000000, 00:13:04.361 "poll_groups": [ 00:13:04.361 { 00:13:04.361 "name": "nvmf_tgt_poll_group_000", 00:13:04.361 "admin_qpairs": 0, 00:13:04.361 "io_qpairs": 224, 00:13:04.361 "current_admin_qpairs": 0, 00:13:04.361 "current_io_qpairs": 0, 00:13:04.361 "pending_bdev_io": 0, 00:13:04.361 "completed_nvme_io": 226, 00:13:04.361 "transports": [ 00:13:04.361 { 00:13:04.361 "trtype": "TCP" 00:13:04.361 } 00:13:04.361 ] 00:13:04.361 }, 00:13:04.361 { 00:13:04.361 "name": "nvmf_tgt_poll_group_001", 00:13:04.361 "admin_qpairs": 1, 00:13:04.361 "io_qpairs": 223, 00:13:04.361 "current_admin_qpairs": 0, 00:13:04.361 "current_io_qpairs": 0, 00:13:04.361 "pending_bdev_io": 0, 00:13:04.361 "completed_nvme_io": 273, 00:13:04.361 "transports": [ 00:13:04.361 { 00:13:04.361 "trtype": "TCP" 00:13:04.361 } 00:13:04.361 ] 00:13:04.361 }, 00:13:04.361 { 00:13:04.361 "name": "nvmf_tgt_poll_group_002", 00:13:04.361 "admin_qpairs": 6, 00:13:04.361 "io_qpairs": 218, 00:13:04.361 "current_admin_qpairs": 0, 00:13:04.361 "current_io_qpairs": 0, 00:13:04.361 "pending_bdev_io": 0, 00:13:04.361 "completed_nvme_io": 271, 00:13:04.361 "transports": [ 00:13:04.361 { 00:13:04.361 "trtype": "TCP" 00:13:04.361 } 00:13:04.361 ] 00:13:04.361 }, 00:13:04.361 { 00:13:04.361 "name": "nvmf_tgt_poll_group_003", 00:13:04.361 "admin_qpairs": 0, 00:13:04.361 "io_qpairs": 224, 
00:13:04.361 "current_admin_qpairs": 0, 00:13:04.361 "current_io_qpairs": 0, 00:13:04.361 "pending_bdev_io": 0, 00:13:04.361 "completed_nvme_io": 469, 00:13:04.361 "transports": [ 00:13:04.361 { 00:13:04.361 "trtype": "TCP" 00:13:04.361 } 00:13:04.361 ] 00:13:04.361 } 00:13:04.361 ] 00:13:04.361 }' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.361 rmmod nvme_tcp 00:13:04.361 rmmod nvme_fabrics 00:13:04.361 rmmod nvme_keyring 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 554097 ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 554097 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 554097 ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 554097 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 554097 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 554097' 00:13:04.361 killing process with pid 554097 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@971 -- # kill 554097 00:13:04.361 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 554097 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.623 13:37:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.166 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.166 00:13:07.166 real 0m37.502s 00:13:07.166 user 1m53.774s 00:13:07.166 sys 0m7.559s 00:13:07.166 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.166 13:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.166 ************************************ 00:13:07.166 END TEST nvmf_rpc 00:13:07.166 
************************************ 00:13:07.166 13:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.166 13:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:07.167 13:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.167 13:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.167 ************************************ 00:13:07.167 START TEST nvmf_invalid 00:13:07.167 ************************************ 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.167 * Looking for test storage... 00:13:07.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:07.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.167 --rc genhtml_branch_coverage=1 00:13:07.167 --rc genhtml_function_coverage=1 00:13:07.167 --rc genhtml_legend=1 00:13:07.167 --rc geninfo_all_blocks=1 00:13:07.167 --rc geninfo_unexecuted_blocks=1 00:13:07.167 00:13:07.167 ' 
00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:07.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.167 --rc genhtml_branch_coverage=1 00:13:07.167 --rc genhtml_function_coverage=1 00:13:07.167 --rc genhtml_legend=1 00:13:07.167 --rc geninfo_all_blocks=1 00:13:07.167 --rc geninfo_unexecuted_blocks=1 00:13:07.167 00:13:07.167 ' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:07.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.167 --rc genhtml_branch_coverage=1 00:13:07.167 --rc genhtml_function_coverage=1 00:13:07.167 --rc genhtml_legend=1 00:13:07.167 --rc geninfo_all_blocks=1 00:13:07.167 --rc geninfo_unexecuted_blocks=1 00:13:07.167 00:13:07.167 ' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:07.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.167 --rc genhtml_branch_coverage=1 00:13:07.167 --rc genhtml_function_coverage=1 00:13:07.167 --rc genhtml_legend=1 00:13:07.167 --rc geninfo_all_blocks=1 00:13:07.167 --rc geninfo_unexecuted_blocks=1 00:13:07.167 00:13:07.167 ' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.167 13:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.167 
13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.167 13:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.167 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.168 13:37:30 
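The `[: : integer expression expected` message above (from `nvmf/common.sh` line 33, where `'[' '' -eq 1 ']'` is evaluated) is a classic shell pitfall: `-eq` requires integers on both sides, so an empty or unset variable makes `[` fail with status 2, an expression error, rather than a clean false. A minimal reproduction with a stand-in variable (`no_huge` is our own name for illustration):

```shell
# Reproduces the "[: : integer expression expected" noise from the log:
# comparing an empty string with -eq is an expression error (exit 2),
# not an ordinary "false" (exit 1).
no_huge=""                          # stands in for the empty value at line 33

rc=0
[ "$no_huge" -eq 1 ] 2>/dev/null || rc=$?
echo "exit status: $rc"             # prints "exit status: 2"

# Defaulting the empty value with ${var:-0} keeps [ happy and silent.
if [ "${no_huge:-0}" -eq 1 ]; then
    echo "huge pages disabled"
else
    echo "flag not set"             # this branch runs
fi
```

In this log the error is cosmetic: the script treats the failed test as false and carries on, which is why the run continues normally afterwards.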
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.168 13:37:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.314 13:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:15.314 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.315 13:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:15.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:15.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:15.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:15.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.315 13:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.315 13:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:13:15.315 00:13:15.315 --- 10.0.0.2 ping statistics --- 00:13:15.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.315 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:13:15.315 00:13:15.315 --- 10.0.0.1 ping statistics --- 00:13:15.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.315 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.315 13:37:37 
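The `nvmf_tcp_init` sequence above builds the TCP testbed by moving one physical port (`cvl_0_0`) into a dedicated network namespace as the target side (10.0.0.2) while the peer port (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), then pings both directions to verify the path. A root-free sketch of the same plumbing, with `DRYRUN=echo` so the commands are printed rather than executed (clear `DRYRUN` and run as root to apply them for real; interface and namespace names are taken from the log):

```shell
# Hedged sketch of the netns plumbing shown in the trace. With DRYRUN=echo
# the ip/ping commands are only printed; unset it (as root) to execute.
DRYRUN=echo

setup_tcp_testbed() {
    local ns=cvl_0_0_ns_spdk target=cvl_0_0 initiator=cvl_0_1
    $DRYRUN ip netns add "$ns"
    $DRYRUN ip link set "$target" netns "$ns"           # target NIC into the namespace
    $DRYRUN ip addr add 10.0.0.1/24 dev "$initiator"    # initiator side, root ns
    $DRYRUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
    $DRYRUN ip link set "$initiator" up
    $DRYRUN ip netns exec "$ns" ip link set "$target" up
    $DRYRUN ip netns exec "$ns" ip link set lo up
    # Verify reachability in both directions, as the log does with ping -c 1
    $DRYRUN ping -c 1 10.0.0.2
    $DRYRUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_tcp_testbed
```

Isolating the target in its own namespace is what lets a single host exercise real NIC-to-NIC TCP traffic: `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk`, so its listener at 10.0.0.2:4420 is reached over the wire rather than loopback.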
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=563745 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 563745 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 563745 ']' 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.315 13:37:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.315 [2024-11-06 13:37:37.690046] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:13:15.315 [2024-11-06 13:37:37.690117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.315 [2024-11-06 13:37:37.774080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.315 [2024-11-06 13:37:37.815983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.315 [2024-11-06 13:37:37.816018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.315 [2024-11-06 13:37:37.816026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.315 [2024-11-06 13:37:37.816033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.315 [2024-11-06 13:37:37.816043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.315 [2024-11-06 13:37:37.817998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.315 [2024-11-06 13:37:37.818191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.315 [2024-11-06 13:37:37.818348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.315 [2024-11-06 13:37:37.818349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.315 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.316 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25128 00:13:15.316 [2024-11-06 13:37:38.677595] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:15.578 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:15.578 { 00:13:15.578 "nqn": "nqn.2016-06.io.spdk:cnode25128", 00:13:15.578 "tgt_name": "foobar", 00:13:15.578 "method": "nvmf_create_subsystem", 00:13:15.578 "req_id": 1 00:13:15.578 } 00:13:15.578 Got JSON-RPC error 
response 00:13:15.578 response: 00:13:15.578 { 00:13:15.578 "code": -32603, 00:13:15.578 "message": "Unable to find target foobar" 00:13:15.578 }' 00:13:15.578 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:15.578 { 00:13:15.578 "nqn": "nqn.2016-06.io.spdk:cnode25128", 00:13:15.578 "tgt_name": "foobar", 00:13:15.578 "method": "nvmf_create_subsystem", 00:13:15.578 "req_id": 1 00:13:15.578 } 00:13:15.578 Got JSON-RPC error response 00:13:15.578 response: 00:13:15.578 { 00:13:15.578 "code": -32603, 00:13:15.578 "message": "Unable to find target foobar" 00:13:15.578 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:15.578 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:15.578 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15072 00:13:15.578 [2024-11-06 13:37:38.866278] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15072: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:15.578 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:15.578 { 00:13:15.578 "nqn": "nqn.2016-06.io.spdk:cnode15072", 00:13:15.578 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.579 "method": "nvmf_create_subsystem", 00:13:15.579 "req_id": 1 00:13:15.579 } 00:13:15.579 Got JSON-RPC error response 00:13:15.579 response: 00:13:15.579 { 00:13:15.579 "code": -32602, 00:13:15.579 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.579 }' 00:13:15.579 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:15.579 { 00:13:15.579 "nqn": "nqn.2016-06.io.spdk:cnode15072", 00:13:15.579 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.579 "method": "nvmf_create_subsystem", 
00:13:15.579 "req_id": 1 00:13:15.579 } 00:13:15.579 Got JSON-RPC error response 00:13:15.579 response: 00:13:15.579 { 00:13:15.579 "code": -32602, 00:13:15.579 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.579 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:15.579 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:15.579 13:37:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8819 00:13:15.841 [2024-11-06 13:37:39.058883] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8819: invalid model number 'SPDK_Controller' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:15.841 { 00:13:15.841 "nqn": "nqn.2016-06.io.spdk:cnode8819", 00:13:15.841 "model_number": "SPDK_Controller\u001f", 00:13:15.841 "method": "nvmf_create_subsystem", 00:13:15.841 "req_id": 1 00:13:15.841 } 00:13:15.841 Got JSON-RPC error response 00:13:15.841 response: 00:13:15.841 { 00:13:15.841 "code": -32602, 00:13:15.841 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.841 }' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:15.841 { 00:13:15.841 "nqn": "nqn.2016-06.io.spdk:cnode8819", 00:13:15.841 "model_number": "SPDK_Controller\u001f", 00:13:15.841 "method": "nvmf_create_subsystem", 00:13:15.841 "req_id": 1 00:13:15.841 } 00:13:15.841 Got JSON-RPC error response 00:13:15.841 response: 00:13:15.841 { 00:13:15.841 "code": -32602, 00:13:15.841 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.841 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:15.841 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:15.841 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:15.842 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:15.842 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.842 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:16.103 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"/kFc[CnsCXFU6;i'\''9C2\' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '"/kFc[CnsCXFU6;i'\''9C2\' nqn.2016-06.io.spdk:cnode2597 00:13:16.104 [2024-11-06 13:37:39.416041] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2597: invalid serial number '"/kFc[CnsCXFU6;i'9C2\' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:16.104 { 00:13:16.104 "nqn": "nqn.2016-06.io.spdk:cnode2597", 00:13:16.104 "serial_number": "\"/kFc[CnsCXFU6;i'\''9C2\\", 00:13:16.104 "method": "nvmf_create_subsystem", 00:13:16.104 "req_id": 1 00:13:16.104 } 00:13:16.104 Got JSON-RPC error response 00:13:16.104 response: 00:13:16.104 { 00:13:16.104 "code": -32602, 00:13:16.104 "message": "Invalid SN \"/kFc[CnsCXFU6;i'\''9C2\\" 00:13:16.104 }' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:16.104 { 00:13:16.104 "nqn": "nqn.2016-06.io.spdk:cnode2597", 00:13:16.104 "serial_number": "\"/kFc[CnsCXFU6;i'9C2\\", 00:13:16.104 "method": "nvmf_create_subsystem", 00:13:16.104 "req_id": 1 00:13:16.104 } 00:13:16.104 Got JSON-RPC error response 00:13:16.104 response: 00:13:16.104 { 00:13:16.104 "code": -32602, 00:13:16.104 "message": "Invalid SN \"/kFc[CnsCXFU6;i'9C2\\" 00:13:16.104 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.104 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:16.366 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.366 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:16.367 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:16.367 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.367 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.367 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.628 13:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]] 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#' nqn.2016-06.io.spdk:cnode23433 00:13:16.628 [2024-11-06 13:37:39.925641] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23433: invalid model number 'F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:16.628 { 00:13:16.628 "nqn": "nqn.2016-06.io.spdk:cnode23433", 00:13:16.628 "model_number": "F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#", 00:13:16.628 "method": "nvmf_create_subsystem", 00:13:16.628 "req_id": 1 00:13:16.628 } 00:13:16.628 Got JSON-RPC error response 00:13:16.628 response: 00:13:16.628 { 00:13:16.628 "code": -32602, 00:13:16.628 "message": "Invalid MN F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#" 00:13:16.628 }' 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:16.628 { 00:13:16.628 "nqn": 
"nqn.2016-06.io.spdk:cnode23433", 00:13:16.628 "model_number": "F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#", 00:13:16.628 "method": "nvmf_create_subsystem", 00:13:16.628 "req_id": 1 00:13:16.628 } 00:13:16.628 Got JSON-RPC error response 00:13:16.628 response: 00:13:16.628 { 00:13:16.628 "code": -32602, 00:13:16.628 "message": "Invalid MN F&w+4`_w@CwZU~8_LBcR$7%=+#-X7k]g2VY5B @M#" 00:13:16.628 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:16.628 13:37:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:16.888 [2024-11-06 13:37:40.110587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.888 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:17.149 [2024-11-06 13:37:40.483714] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:17.149 { 00:13:17.149 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.149 "listen_address": { 00:13:17.149 "trtype": "tcp", 00:13:17.149 "traddr": "", 00:13:17.149 "trsvcid": "4421" 
00:13:17.149 }, 00:13:17.149 "method": "nvmf_subsystem_remove_listener", 00:13:17.149 "req_id": 1 00:13:17.149 } 00:13:17.149 Got JSON-RPC error response 00:13:17.149 response: 00:13:17.149 { 00:13:17.149 "code": -32602, 00:13:17.149 "message": "Invalid parameters" 00:13:17.149 }' 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:17.149 { 00:13:17.149 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.149 "listen_address": { 00:13:17.149 "trtype": "tcp", 00:13:17.149 "traddr": "", 00:13:17.149 "trsvcid": "4421" 00:13:17.149 }, 00:13:17.149 "method": "nvmf_subsystem_remove_listener", 00:13:17.149 "req_id": 1 00:13:17.149 } 00:13:17.149 Got JSON-RPC error response 00:13:17.149 response: 00:13:17.149 { 00:13:17.149 "code": -32602, 00:13:17.149 "message": "Invalid parameters" 00:13:17.149 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:17.149 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31723 -i 0 00:13:17.409 [2024-11-06 13:37:40.664251] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31723: invalid cntlid range [0-65519] 00:13:17.409 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:17.409 { 00:13:17.409 "nqn": "nqn.2016-06.io.spdk:cnode31723", 00:13:17.409 "min_cntlid": 0, 00:13:17.409 "method": "nvmf_create_subsystem", 00:13:17.409 "req_id": 1 00:13:17.409 } 00:13:17.409 Got JSON-RPC error response 00:13:17.409 response: 00:13:17.409 { 00:13:17.409 "code": -32602, 00:13:17.409 "message": "Invalid cntlid range [0-65519]" 00:13:17.409 }' 00:13:17.409 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:17.409 { 00:13:17.409 "nqn": "nqn.2016-06.io.spdk:cnode31723", 00:13:17.409 "min_cntlid": 0, 00:13:17.409 "method": 
"nvmf_create_subsystem", 00:13:17.409 "req_id": 1 00:13:17.409 } 00:13:17.409 Got JSON-RPC error response 00:13:17.409 response: 00:13:17.409 { 00:13:17.409 "code": -32602, 00:13:17.409 "message": "Invalid cntlid range [0-65519]" 00:13:17.409 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.409 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29038 -i 65520 00:13:17.670 [2024-11-06 13:37:40.844848] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29038: invalid cntlid range [65520-65519] 00:13:17.670 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:17.670 { 00:13:17.670 "nqn": "nqn.2016-06.io.spdk:cnode29038", 00:13:17.670 "min_cntlid": 65520, 00:13:17.670 "method": "nvmf_create_subsystem", 00:13:17.670 "req_id": 1 00:13:17.670 } 00:13:17.670 Got JSON-RPC error response 00:13:17.670 response: 00:13:17.670 { 00:13:17.670 "code": -32602, 00:13:17.670 "message": "Invalid cntlid range [65520-65519]" 00:13:17.670 }' 00:13:17.670 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:17.670 { 00:13:17.670 "nqn": "nqn.2016-06.io.spdk:cnode29038", 00:13:17.670 "min_cntlid": 65520, 00:13:17.670 "method": "nvmf_create_subsystem", 00:13:17.670 "req_id": 1 00:13:17.670 } 00:13:17.670 Got JSON-RPC error response 00:13:17.670 response: 00:13:17.670 { 00:13:17.670 "code": -32602, 00:13:17.670 "message": "Invalid cntlid range [65520-65519]" 00:13:17.670 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.670 13:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30494 -I 0 00:13:17.670 [2024-11-06 13:37:41.021359] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode30494: invalid cntlid range [1-0] 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:17.931 { 00:13:17.931 "nqn": "nqn.2016-06.io.spdk:cnode30494", 00:13:17.931 "max_cntlid": 0, 00:13:17.931 "method": "nvmf_create_subsystem", 00:13:17.931 "req_id": 1 00:13:17.931 } 00:13:17.931 Got JSON-RPC error response 00:13:17.931 response: 00:13:17.931 { 00:13:17.931 "code": -32602, 00:13:17.931 "message": "Invalid cntlid range [1-0]" 00:13:17.931 }' 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:17.931 { 00:13:17.931 "nqn": "nqn.2016-06.io.spdk:cnode30494", 00:13:17.931 "max_cntlid": 0, 00:13:17.931 "method": "nvmf_create_subsystem", 00:13:17.931 "req_id": 1 00:13:17.931 } 00:13:17.931 Got JSON-RPC error response 00:13:17.931 response: 00:13:17.931 { 00:13:17.931 "code": -32602, 00:13:17.931 "message": "Invalid cntlid range [1-0]" 00:13:17.931 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10784 -I 65520 00:13:17.931 [2024-11-06 13:37:41.209946] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10784: invalid cntlid range [1-65520] 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:17.931 { 00:13:17.931 "nqn": "nqn.2016-06.io.spdk:cnode10784", 00:13:17.931 "max_cntlid": 65520, 00:13:17.931 "method": "nvmf_create_subsystem", 00:13:17.931 "req_id": 1 00:13:17.931 } 00:13:17.931 Got JSON-RPC error response 00:13:17.931 response: 00:13:17.931 { 00:13:17.931 "code": -32602, 00:13:17.931 "message": "Invalid cntlid range [1-65520]" 00:13:17.931 }' 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:13:17.931 { 00:13:17.931 "nqn": "nqn.2016-06.io.spdk:cnode10784", 00:13:17.931 "max_cntlid": 65520, 00:13:17.931 "method": "nvmf_create_subsystem", 00:13:17.931 "req_id": 1 00:13:17.931 } 00:13:17.931 Got JSON-RPC error response 00:13:17.931 response: 00:13:17.931 { 00:13:17.931 "code": -32602, 00:13:17.931 "message": "Invalid cntlid range [1-65520]" 00:13:17.931 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.931 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22078 -i 6 -I 5 00:13:18.193 [2024-11-06 13:37:41.394522] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22078: invalid cntlid range [6-5] 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:18.193 { 00:13:18.193 "nqn": "nqn.2016-06.io.spdk:cnode22078", 00:13:18.193 "min_cntlid": 6, 00:13:18.193 "max_cntlid": 5, 00:13:18.193 "method": "nvmf_create_subsystem", 00:13:18.193 "req_id": 1 00:13:18.193 } 00:13:18.193 Got JSON-RPC error response 00:13:18.193 response: 00:13:18.193 { 00:13:18.193 "code": -32602, 00:13:18.193 "message": "Invalid cntlid range [6-5]" 00:13:18.193 }' 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:18.193 { 00:13:18.193 "nqn": "nqn.2016-06.io.spdk:cnode22078", 00:13:18.193 "min_cntlid": 6, 00:13:18.193 "max_cntlid": 5, 00:13:18.193 "method": "nvmf_create_subsystem", 00:13:18.193 "req_id": 1 00:13:18.193 } 00:13:18.193 Got JSON-RPC error response 00:13:18.193 response: 00:13:18.193 { 00:13:18.193 "code": -32602, 00:13:18.193 "message": "Invalid cntlid range [6-5]" 00:13:18.193 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:18.193 { 00:13:18.193 "name": "foobar", 00:13:18.193 "method": "nvmf_delete_target", 00:13:18.193 "req_id": 1 00:13:18.193 } 00:13:18.193 Got JSON-RPC error response 00:13:18.193 response: 00:13:18.193 { 00:13:18.193 "code": -32602, 00:13:18.193 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:18.193 }' 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:18.193 { 00:13:18.193 "name": "foobar", 00:13:18.193 "method": "nvmf_delete_target", 00:13:18.193 "req_id": 1 00:13:18.193 } 00:13:18.193 Got JSON-RPC error response 00:13:18.193 response: 00:13:18.193 { 00:13:18.193 "code": -32602, 00:13:18.193 "message": "The specified target doesn't exist, cannot delete it." 00:13:18.193 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.193 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.193 rmmod nvme_tcp 00:13:18.193 
rmmod nvme_fabrics 00:13:18.454 rmmod nvme_keyring 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 563745 ']' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 563745 ']' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 563745' 00:13:18.454 killing process with pid 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 563745 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.454 13:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.999 00:13:20.999 real 0m13.864s 00:13:20.999 user 0m20.392s 00:13:20.999 sys 0m6.516s 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:20.999 ************************************ 00:13:20.999 END TEST nvmf_invalid 00:13:20.999 ************************************ 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:20.999 13:37:43 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.999 ************************************ 00:13:20.999 START TEST nvmf_connect_stress 00:13:20.999 ************************************ 00:13:20.999 13:37:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:20.999 * Looking for test storage... 00:13:20.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.999 13:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.999 --rc genhtml_branch_coverage=1 00:13:20.999 --rc genhtml_function_coverage=1 00:13:20.999 --rc genhtml_legend=1 00:13:20.999 --rc geninfo_all_blocks=1 00:13:20.999 --rc geninfo_unexecuted_blocks=1 00:13:20.999 00:13:20.999 ' 00:13:20.999 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.999 --rc genhtml_branch_coverage=1 00:13:21.000 --rc genhtml_function_coverage=1 00:13:21.000 --rc genhtml_legend=1 00:13:21.000 --rc geninfo_all_blocks=1 00:13:21.000 --rc geninfo_unexecuted_blocks=1 00:13:21.000 00:13:21.000 ' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.000 --rc genhtml_branch_coverage=1 00:13:21.000 --rc genhtml_function_coverage=1 00:13:21.000 --rc genhtml_legend=1 00:13:21.000 --rc geninfo_all_blocks=1 00:13:21.000 --rc geninfo_unexecuted_blocks=1 00:13:21.000 00:13:21.000 ' 00:13:21.000 13:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.000 --rc genhtml_branch_coverage=1 00:13:21.000 --rc genhtml_function_coverage=1 00:13:21.000 --rc genhtml_legend=1 00:13:21.000 --rc geninfo_all_blocks=1 00:13:21.000 --rc geninfo_unexecuted_blocks=1 00:13:21.000 00:13:21.000 ' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.000 13:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.000 13:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.000 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:29.146 
Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:29.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:29.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.146 13:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:29.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.146 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.147 
13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:13:29.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:13:29.147 00:13:29.147 --- 10.0.0.2 ping statistics --- 00:13:29.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.147 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:13:29.147 00:13:29.147 --- 10.0.0.1 ping statistics --- 00:13:29.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.147 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=568854 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 568854 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 568854 ']' 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.147 13:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 [2024-11-06 13:37:51.475017] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:13:29.147 [2024-11-06 13:37:51.475073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.147 [2024-11-06 13:37:51.571562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.147 [2024-11-06 13:37:51.617074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.147 [2024-11-06 13:37:51.617126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.147 [2024-11-06 13:37:51.617135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.147 [2024-11-06 13:37:51.617142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.147 [2024-11-06 13:37:51.617148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.147 [2024-11-06 13:37:51.619084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.147 [2024-11-06 13:37:51.619223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.147 [2024-11-06 13:37:51.619224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 [2024-11-06 13:37:52.312396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 [2024-11-06 13:37:52.336819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.147 NULL1 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=569204 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.147 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.148 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.409 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.409 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:29.409 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.409 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.409 13:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.981 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.981 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:29.981 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.981 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.981 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.242 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.242 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:30.242 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.242 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.242 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.503 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.503 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:30.503 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.503 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.503 13:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.765 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.765 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:30.765 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.765 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.765 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.336 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.336 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:31.336 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.336 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.336 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.596 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.596 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:31.596 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.596 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.596 13:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.858 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.858 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:31.858 13:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.858 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.858 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.118 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.118 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:32.118 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.118 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.118 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.379 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.379 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:32.379 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.379 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.379 13:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.950 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.950 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:32.950 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.950 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.950 13:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.210 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.210 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:33.210 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.210 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.210 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.471 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.471 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:33.471 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.471 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.471 13:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.732 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.732 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:33.732 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.732 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.732 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.992 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.992 13:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:33.992 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.992 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.992 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.562 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.562 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:34.562 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.562 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.562 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.822 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.822 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:34.822 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.822 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.822 13:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.083 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.083 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:35.083 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.083 13:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.083 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.344 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.344 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:35.344 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.344 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.344 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.606 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.606 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:35.606 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.606 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.606 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.178 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.178 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:36.178 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.178 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.178 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.439 13:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.439 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:36.439 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.439 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.439 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.752 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.752 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:36.752 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.752 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.752 13:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.013 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.013 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:37.013 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.014 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.014 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.275 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.275 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:37.275 
13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.275 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.275 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.846 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.846 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:37.846 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.846 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.846 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.108 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.108 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:38.108 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.108 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.108 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.369 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.369 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:38.369 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.369 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.369 
13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.629 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.629 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:38.629 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.629 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.629 13:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.889 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.889 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:38.889 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.889 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.889 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.149 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.410 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.410 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 569204 00:13:39.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (569204) - No such process 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 569204 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.411 rmmod nvme_tcp 00:13:39.411 rmmod nvme_fabrics 00:13:39.411 rmmod nvme_keyring 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 568854 ']' 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 568854 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 568854 ']' 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 568854 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 
00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 568854 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 568854' 00:13:39.411 killing process with pid 568854 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 568854 00:13:39.411 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 568854 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.672 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.585 00:13:41.585 real 0m20.933s 00:13:41.585 user 0m42.132s 00:13:41.585 sys 0m8.983s 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.585 ************************************ 00:13:41.585 END TEST nvmf_connect_stress 00:13:41.585 ************************************ 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.585 13:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.846 ************************************ 00:13:41.846 START TEST nvmf_fused_ordering 00:13:41.846 ************************************ 00:13:41.846 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:41.846 * Looking for test storage... 
00:13:41.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:41.846 13:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.846 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.847 13:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.847 --rc genhtml_branch_coverage=1 00:13:41.847 --rc genhtml_function_coverage=1 00:13:41.847 --rc genhtml_legend=1 00:13:41.847 --rc geninfo_all_blocks=1 00:13:41.847 --rc geninfo_unexecuted_blocks=1 00:13:41.847 00:13:41.847 ' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.847 --rc genhtml_branch_coverage=1 00:13:41.847 --rc genhtml_function_coverage=1 00:13:41.847 --rc genhtml_legend=1 00:13:41.847 --rc geninfo_all_blocks=1 00:13:41.847 --rc geninfo_unexecuted_blocks=1 00:13:41.847 00:13:41.847 ' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.847 --rc genhtml_branch_coverage=1 00:13:41.847 --rc genhtml_function_coverage=1 00:13:41.847 --rc genhtml_legend=1 00:13:41.847 --rc geninfo_all_blocks=1 00:13:41.847 --rc geninfo_unexecuted_blocks=1 00:13:41.847 00:13:41.847 ' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.847 --rc genhtml_branch_coverage=1 00:13:41.847 --rc genhtml_function_coverage=1 00:13:41.847 --rc genhtml_legend=1 00:13:41.847 --rc geninfo_all_blocks=1 00:13:41.847 --rc geninfo_unexecuted_blocks=1 00:13:41.847 00:13:41.847 ' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
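The `[: : integer expression expected` message logged above (from `nvmf/common.sh` line 33, where the trace shows `'[' '' -eq 1 ']'`) is a classic bash pitfall: `-eq` demands integer operands, and an empty variable makes the test builtin error out. A minimal reproduction, with an illustrative variable name rather than the one the SPDK script actually uses:

```shell
# Reproduce the "[: : integer expression expected" noise seen in the log.
val=""   # empty, like the unset setting that hit nvmf/common.sh line 33

# test(1) requires an integer on both sides of -eq; an empty string errors:
msg="$([ "$val" -eq 1 ] 2>&1)" || true
case "$msg" in
  *"integer expression expected"*) echo "errors on empty operand" ;;
esac

# Defaulting the operand silences the error without changing the outcome:
if [ "${val:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

The test still evaluates false either way; the `${val:-0}` guard only keeps the diagnostic out of the log.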
00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:41.847 13:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.988 13:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:49.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.988 13:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.988 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:49.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.989 13:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:49.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:49.989 Found net devices under 0000:4b:00.1: cvl_0_1 
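The `Found net devices under 0000:4b:00.x: cvl_0_x` lines above come from a simple sysfs glob: the script expands `/sys/bus/pci/devices/$pci/net/*` and strips the directory prefix to recover interface names. The sketch below shows that pattern against a mock sysfs tree, since real PCI hardware isn't assumed here:

```shell
# Device-discovery pattern from the trace above, run against a mock
# sysfs layout instead of real hardware.
tmp=$(mktemp -d)
pci="0000:4b:00.0"
mkdir -p "$tmp/$pci/net/cvl_0_0"            # stand-in for the kernel's net/ link

pci_net_devs=("$tmp/$pci/net/"*)            # glob the per-device net/ entries
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface basename

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$tmp"
```

On a live system the same two array operations yield `cvl_0_0` directly from `/sys/bus/pci/devices/0000:4b:00.0/net/`.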
00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.989 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:13:49.989 00:13:49.989 --- 10.0.0.2 ping statistics --- 00:13:49.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.989 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:13:49.989 00:13:49.989 --- 10.0.0.1 ping statistics --- 00:13:49.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.989 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:49.989 13:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=575234 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 575234 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 575234 ']' 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.989 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.989 [2024-11-06 13:38:12.257624] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:13:49.989 [2024-11-06 13:38:12.257682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.989 [2024-11-06 13:38:12.356693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.989 [2024-11-06 13:38:12.407272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.989 [2024-11-06 13:38:12.407321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.989 [2024-11-06 13:38:12.407335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.989 [2024-11-06 13:38:12.407342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.989 [2024-11-06 13:38:12.407348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
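The `nvmf_tcp_init` steps traced earlier (namespace creation, moving `cvl_0_0` into it, addressing both sides, the iptables rule, and the cross-namespace pings) can be condensed into one sequence. A `run` wrapper only prints each command so the sketch stays runnable without root or the `cvl_0_*` interfaces; drop the wrapper to execute it for real:

```shell
# Namespace plumbing from the log, condensed; commands are printed, not run.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"                          # target gets its own netns
run ip link set cvl_0_0 netns "$NS"             # move one port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, default netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # reachability check, as logged
```

Splitting the two ports of one NIC across namespaces is what lets a single host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over real hardware.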
00:13:49.990 [2024-11-06 13:38:12.408131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 [2024-11-06 13:38:13.114013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 [2024-11-06 13:38:13.130291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 NULL1 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.990 13:38:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:49.990 [2024-11-06 13:38:13.185803] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:13:49.990 [2024-11-06 13:38:13.185857] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575582 ] 00:13:50.560 Attached to nqn.2016-06.io.spdk:cnode1 00:13:50.561 Namespace ID: 1 size: 1GB 00:13:50.561 fused_ordering(0) 00:13:50.561 fused_ordering(1) 00:13:50.561 fused_ordering(2) 00:13:50.561 fused_ordering(3) 00:13:50.561 fused_ordering(4) 00:13:50.561 fused_ordering(5) 00:13:50.561 fused_ordering(6) 00:13:50.561 fused_ordering(7) 00:13:50.561 fused_ordering(8) 00:13:50.561 fused_ordering(9) 00:13:50.561 fused_ordering(10) 00:13:50.561 fused_ordering(11) 00:13:50.561 fused_ordering(12) 00:13:50.561 fused_ordering(13) 00:13:50.561 fused_ordering(14) 00:13:50.561 fused_ordering(15) 00:13:50.561 fused_ordering(16) 00:13:50.561 fused_ordering(17) 00:13:50.561 fused_ordering(18) 00:13:50.561 fused_ordering(19) 00:13:50.561 fused_ordering(20) 00:13:50.561 fused_ordering(21) 00:13:50.561 fused_ordering(22) 00:13:50.561 fused_ordering(23) 00:13:50.561 fused_ordering(24) 00:13:50.561 fused_ordering(25) 00:13:50.561 fused_ordering(26) 00:13:50.561 fused_ordering(27) 00:13:50.561 
fused_ordering(28) 00:13:50.561 fused_ordering(29) 00:13:50.561 fused_ordering(30) 00:13:50.561 fused_ordering(31) 00:13:50.561 fused_ordering(32) 00:13:50.561 fused_ordering(33) 00:13:50.561 fused_ordering(34) 00:13:50.561 fused_ordering(35) 00:13:50.561 fused_ordering(36) 00:13:50.561 fused_ordering(37) 00:13:50.561 fused_ordering(38) 00:13:50.561 fused_ordering(39) 00:13:50.561 fused_ordering(40) 00:13:50.561 fused_ordering(41) 00:13:50.561 fused_ordering(42) 00:13:50.561 fused_ordering(43) 00:13:50.561 fused_ordering(44) 00:13:50.561 fused_ordering(45) 00:13:50.561 fused_ordering(46) 00:13:50.561 fused_ordering(47) 00:13:50.561 fused_ordering(48) 00:13:50.561 fused_ordering(49) 00:13:50.561 fused_ordering(50) 00:13:50.561 fused_ordering(51) 00:13:50.561 fused_ordering(52) 00:13:50.561 fused_ordering(53) 00:13:50.561 fused_ordering(54) 00:13:50.561 fused_ordering(55) 00:13:50.561 fused_ordering(56) 00:13:50.561 fused_ordering(57) 00:13:50.561 fused_ordering(58) 00:13:50.561 fused_ordering(59) 00:13:50.561 fused_ordering(60) 00:13:50.561 fused_ordering(61) 00:13:50.561 fused_ordering(62) 00:13:50.561 fused_ordering(63) 00:13:50.561 fused_ordering(64) 00:13:50.561 fused_ordering(65) 00:13:50.561 fused_ordering(66) 00:13:50.561 fused_ordering(67) 00:13:50.561 fused_ordering(68) 00:13:50.561 fused_ordering(69) 00:13:50.561 fused_ordering(70) 00:13:50.561 fused_ordering(71) 00:13:50.561 fused_ordering(72) 00:13:50.561 fused_ordering(73) 00:13:50.561 fused_ordering(74) 00:13:50.561 fused_ordering(75) 00:13:50.561 fused_ordering(76) 00:13:50.561 fused_ordering(77) 00:13:50.561 fused_ordering(78) 00:13:50.561 fused_ordering(79) 00:13:50.561 fused_ordering(80) 00:13:50.561 fused_ordering(81) 00:13:50.561 fused_ordering(82) 00:13:50.561 fused_ordering(83) 00:13:50.561 fused_ordering(84) 00:13:50.561 fused_ordering(85) 00:13:50.561 fused_ordering(86) 00:13:50.561 fused_ordering(87) 00:13:50.561 fused_ordering(88) 00:13:50.561 fused_ordering(89) 00:13:50.561 
fused_ordering(90) 00:13:50.561 [per-iteration output for fused_ordering(91) through fused_ordering(1022) elided; all iterations completed between 00:13:50.561 and 00:13:52.229] fused_ordering(1023) 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.229 rmmod nvme_tcp 00:13:52.229 rmmod nvme_fabrics 00:13:52.229 rmmod nvme_keyring 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 575234 ']' 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 575234 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 575234 ']' 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 575234 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.229 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 575234 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 575234' 00:13:52.491 killing process with pid 575234 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 575234 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 575234 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.491 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:55.039 00:13:55.039 real 0m12.882s 00:13:55.039 user 0m6.955s 00:13:55.039 sys 0m6.689s 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:55.039 ************************************ 00:13:55.039 END TEST nvmf_fused_ordering 00:13:55.039 ************************************ 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:55.039 13:38:17 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.039 ************************************ 00:13:55.039 START TEST nvmf_ns_masking 00:13:55.039 ************************************ 00:13:55.039 13:38:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:55.039 * Looking for test storage... 00:13:55.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.039 13:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:55.039 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:55.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.040 --rc genhtml_branch_coverage=1 00:13:55.040 --rc genhtml_function_coverage=1 00:13:55.040 --rc genhtml_legend=1 00:13:55.040 --rc geninfo_all_blocks=1 00:13:55.040 --rc geninfo_unexecuted_blocks=1 00:13:55.040 00:13:55.040 ' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:55.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.040 --rc genhtml_branch_coverage=1 00:13:55.040 --rc genhtml_function_coverage=1 00:13:55.040 --rc genhtml_legend=1 00:13:55.040 --rc geninfo_all_blocks=1 00:13:55.040 --rc geninfo_unexecuted_blocks=1 00:13:55.040 00:13:55.040 ' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:55.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.040 --rc genhtml_branch_coverage=1 00:13:55.040 --rc genhtml_function_coverage=1 00:13:55.040 --rc genhtml_legend=1 00:13:55.040 --rc geninfo_all_blocks=1 00:13:55.040 --rc geninfo_unexecuted_blocks=1 00:13:55.040 00:13:55.040 ' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:55.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.040 --rc genhtml_branch_coverage=1 00:13:55.040 --rc 
genhtml_function_coverage=1 00:13:55.040 --rc genhtml_legend=1 00:13:55.040 --rc geninfo_all_blocks=1 00:13:55.040 --rc geninfo_unexecuted_blocks=1 00:13:55.040 00:13:55.040 ' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:55.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8e6cdbb6-f3bd-4571-adb0-a3a43fbcc7e4 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=622f5a94-08ed-46eb-9bf6-5644f34b7b4d 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=868485c0-dc64-4d6d-b4da-38787a83a46b 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:55.040 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:55.041 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.187 13:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.187 13:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:03.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:03.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.187 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:14:03.188 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:03.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:14:03.188 00:14:03.188 --- 10.0.0.2 ping statistics --- 00:14:03.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.188 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:14:03.188 00:14:03.188 --- 10.0.0.1 ping statistics --- 00:14:03.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.188 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=580252 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 580252 
00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 580252 ']' 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.188 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.188 [2024-11-06 13:38:25.560131] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:14:03.188 [2024-11-06 13:38:25.560190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.188 [2024-11-06 13:38:25.639159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.188 [2024-11-06 13:38:25.673976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.188 [2024-11-06 13:38:25.674008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.188 [2024-11-06 13:38:25.674017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.188 [2024-11-06 13:38:25.674024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.188 [2024-11-06 13:38:25.674031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.188 [2024-11-06 13:38:25.674579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.188 [2024-11-06 13:38:26.507354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:03.188 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:03.449 Malloc1 00:14:03.449 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:03.711 Malloc2 00:14:03.711 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.711 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:03.973 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.233 [2024-11-06 13:38:27.379287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 868485c0-dc64-4d6d-b4da-38787a83a46b -a 10.0.0.2 -s 4420 -i 4 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.233 13:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:04.233 13:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.778 [ 0]:0x1 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.778 
13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6b7cf4f8eb475cb7ccb814fc25f662 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6b7cf4f8eb475cb7ccb814fc25f662 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.778 [ 0]:0x1 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6b7cf4f8eb475cb7ccb814fc25f662 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6b7cf4f8eb475cb7ccb814fc25f662 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.778 [ 1]:0x2 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:06.778 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.040 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.040 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:07.302 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:07.302 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 868485c0-dc64-4d6d-b4da-38787a83a46b -a 10.0.0.2 -s 4420 -i 4 00:14:07.563 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:07.563 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:07.563 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.563 13:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:07.563 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:07.563 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:09.477 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.739 [ 0]:0x2 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.739 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.000 [ 0]:0x1 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6b7cf4f8eb475cb7ccb814fc25f662 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6b7cf4f8eb475cb7ccb814fc25f662 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.000 [ 1]:0x2 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.000 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.262 [ 0]:0x2 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.262 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.523 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:10.523 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 868485c0-dc64-4d6d-b4da-38787a83a46b -a 10.0.0.2 -s 4420 -i 4 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:10.783 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:12.696 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.957 [ 0]:0x1 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.957 13:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6b7cf4f8eb475cb7ccb814fc25f662 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6b7cf4f8eb475cb7ccb814fc25f662 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.957 [ 1]:0x2 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.957 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:13.216 
13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.216 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.477 [ 0]:0x2 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.477 13:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:13.477 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.738 [2024-11-06 13:38:36.862850] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:13.738 request: 00:14:13.738 { 00:14:13.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.738 "nsid": 2, 00:14:13.738 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.738 "method": "nvmf_ns_remove_host", 00:14:13.738 "req_id": 1 00:14:13.738 } 00:14:13.738 Got JSON-RPC error response 00:14:13.738 response: 00:14:13.738 { 00:14:13.738 "code": -32602, 00:14:13.738 "message": "Invalid parameters" 00:14:13.738 } 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.738 13:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.738 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.738 [ 0]:0x2 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e5b14d22d2a4394ad710289a8b4ef2a 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e5b14d22d2a4394ad710289a8b4ef2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:13.738 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=582673 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 582673 /var/tmp/host.sock 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 582673 ']' 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:13.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.999 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:13.999 [2024-11-06 13:38:37.264067] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
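The `ns_is_visible` checks traced above decide namespace visibility by pulling `.nguid` out of `nvme id-ns` JSON and comparing it against all zeros (a masked namespace reports a zero NGUID). A minimal, self-contained sketch of that comparison, using a hardcoded JSON stand-in since no live `/dev/nvme0` is assumed here, could look like:

```shell
#!/usr/bin/env bash
# Sketch of the NGUID visibility check from target/ns_masking.sh.
# Assumption: this JSON stands in for `nvme id-ns /dev/nvme0 -n 0x2 -o json`.
id_ns_json='{"nguid": "4e5b14d22d2a4394ad710289a8b4ef2a"}'

# Extract the nguid field (a jq-free stand-in for `jq -r .nguid`).
nguid=$(sed -n 's/.*"nguid": *"\([0-9a-f]*\)".*/\1/p' <<<"$id_ns_json")

# A namespace counts as visible when its NGUID is not all zeros.
if [[ $nguid != "00000000000000000000000000000000" ]]; then
  echo "namespace visible (nguid=$nguid)"
else
  echo "namespace masked"
fi
```

In the actual test, the same comparison runs once while the host is not allowed to see the namespace (expecting the all-zeros NGUID) and again after `nvmf_ns_add_host` (expecting the real NGUID).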
00:14:13.999 [2024-11-06 13:38:37.264120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582673 ] 00:14:13.999 [2024-11-06 13:38:37.352663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.259 [2024-11-06 13:38:37.388520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.259 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.259 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:14.259 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.520 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.780 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8e6cdbb6-f3bd-4571-adb0-a3a43fbcc7e4 00:14:14.780 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.780 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8E6CDBB6F3BD4571ADB0A3A43FBCC7E4 -i 00:14:14.780 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 622f5a94-08ed-46eb-9bf6-5644f34b7b4d 00:14:14.780 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.780 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 622F5A9408ED46EB9BF65644F34B7B4D -i 00:14:15.041 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.041 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:15.360 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:15.360 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:15.621 nvme0n1 00:14:15.621 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:15.621 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:15.881 nvme1n2 00:14:15.881 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:15.881 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:15.881 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:15.881 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:15.881 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8e6cdbb6-f3bd-4571-adb0-a3a43fbcc7e4 == \8\e\6\c\d\b\b\6\-\f\3\b\d\-\4\5\7\1\-\a\d\b\0\-\a\3\a\4\3\f\b\c\c\7\e\4 ]] 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:16.142 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:16.409 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 622f5a94-08ed-46eb-9bf6-5644f34b7b4d == \6\2\2\f\5\a\9\4\-\0\8\e\d\-\4\6\e\b\-\9\b\f\6\-\5\6\4\4\f\3\4\b\7\b\4\d ]] 00:14:16.409 13:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.409 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8e6cdbb6-f3bd-4571-adb0-a3a43fbcc7e4 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8E6CDBB6F3BD4571ADB0A3A43FBCC7E4 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8E6CDBB6F3BD4571ADB0A3A43FBCC7E4 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.672 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.673 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:16.673 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8E6CDBB6F3BD4571ADB0A3A43FBCC7E4 00:14:16.933 [2024-11-06 13:38:40.091968] bdev.c:8273:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:16.933 [2024-11-06 13:38:40.092005] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:16.933 [2024-11-06 13:38:40.092014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.933 request: 00:14:16.933 { 00:14:16.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.933 "namespace": { 00:14:16.933 "bdev_name": "invalid", 00:14:16.933 "nsid": 1, 00:14:16.933 "nguid": "8E6CDBB6F3BD4571ADB0A3A43FBCC7E4", 00:14:16.933 "no_auto_visible": false 00:14:16.933 }, 00:14:16.933 "method": "nvmf_subsystem_add_ns", 00:14:16.933 "req_id": 1 00:14:16.933 } 00:14:16.933 Got JSON-RPC error response 00:14:16.933 response: 00:14:16.933 { 00:14:16.933 "code": -32602, 00:14:16.933 "message": "Invalid parameters" 00:14:16.933 } 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8e6cdbb6-f3bd-4571-adb0-a3a43fbcc7e4 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8E6CDBB6F3BD4571ADB0A3A43FBCC7E4 -i 00:14:16.933 13:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 582673 ']' 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 582673' 00:14:19.477 killing process with pid 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 582673 00:14:19.477 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.738 rmmod nvme_tcp 00:14:19.738 rmmod 
nvme_fabrics 00:14:19.738 rmmod nvme_keyring 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 580252 ']' 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 580252 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 580252 ']' 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 580252 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:19.738 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 580252 00:14:19.738 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:19.738 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:19.738 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 580252' 00:14:19.738 killing process with pid 580252 00:14:19.738 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 580252 00:14:19.738 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 580252 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:19.999 13:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.999 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.912 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.912 00:14:21.912 real 0m27.348s 00:14:21.912 user 0m30.239s 00:14:21.912 sys 0m7.952s 00:14:21.912 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.912 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.912 ************************************ 00:14:21.912 END TEST nvmf_ns_masking 00:14:21.912 ************************************ 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.174 ************************************ 00:14:22.174 START TEST nvmf_nvme_cli 00:14:22.174 ************************************ 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:22.174 * Looking for test storage... 00:14:22.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.174 13:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.174 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:22.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.175 --rc genhtml_branch_coverage=1 00:14:22.175 --rc genhtml_function_coverage=1 00:14:22.175 --rc genhtml_legend=1 00:14:22.175 --rc geninfo_all_blocks=1 00:14:22.175 --rc geninfo_unexecuted_blocks=1 00:14:22.175 
00:14:22.175 ' 00:14:22.175 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:22.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.175 --rc genhtml_branch_coverage=1 00:14:22.175 --rc genhtml_function_coverage=1 00:14:22.175 --rc genhtml_legend=1 00:14:22.175 --rc geninfo_all_blocks=1 00:14:22.175 --rc geninfo_unexecuted_blocks=1 00:14:22.175 00:14:22.175 ' 00:14:22.175 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:22.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.175 --rc genhtml_branch_coverage=1 00:14:22.175 --rc genhtml_function_coverage=1 00:14:22.175 --rc genhtml_legend=1 00:14:22.175 --rc geninfo_all_blocks=1 00:14:22.175 --rc geninfo_unexecuted_blocks=1 00:14:22.175 00:14:22.175 ' 00:14:22.175 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:22.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.175 --rc genhtml_branch_coverage=1 00:14:22.175 --rc genhtml_function_coverage=1 00:14:22.175 --rc genhtml_legend=1 00:14:22.175 --rc geninfo_all_blocks=1 00:14:22.175 --rc geninfo_unexecuted_blocks=1 00:14:22.175 00:14:22.175 ' 00:14:22.175 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
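The `lt 1.15 2` trace above walks through `scripts/common.sh`'s `cmp_versions`: both version strings are split on `.`, `-` and `:` into arrays, and the components are compared numerically left to right. A simplified sketch of that comparison (an illustrative `version_lt` helper, not the SPDK function itself) is:

```shell
#!/usr/bin/env bash
# Simplified sketch of the componentwise version comparison that
# scripts/common.sh performs in cmp_versions / lt.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<<"$1"
  IFS='.-:' read -ra ver2 <<<"$2"
  local v len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    # Missing components count as 0 (so "2" compares like "2.0").
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0   # first differing component decides
    (( a > b )) && return 1
  done
  return 1   # equal versions: not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2 || echo "2.1 >= 2"
```

Here the test uses the result only to pick which lcov option spelling to export, which is why the trace continues straight into the `LCOV_OPTS` assignments.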
00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.437 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.438 13:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:22.438 13:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:30.578 13:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.578 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.579 13:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.579 13:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:30.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:14:30.579 00:14:30.579 --- 10.0.0.2 ping statistics --- 00:14:30.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.579 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:14:30.579 00:14:30.579 --- 10.0.0.1 ping statistics --- 00:14:30.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.579 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.579 13:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=588049 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 588049 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 588049 ']' 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.579 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.579 [2024-11-06 13:38:53.007413] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:14:30.579 [2024-11-06 13:38:53.007480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.579 [2024-11-06 13:38:53.092909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.579 [2024-11-06 13:38:53.136415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.579 [2024-11-06 13:38:53.136455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.579 [2024-11-06 13:38:53.136463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.579 [2024-11-06 13:38:53.136471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.579 [2024-11-06 13:38:53.136477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.579 [2024-11-06 13:38:53.138105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.579 [2024-11-06 13:38:53.138222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.579 [2024-11-06 13:38:53.138380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.579 [2024-11-06 13:38:53.138382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.579 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 [2024-11-06 13:38:53.870002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 Malloc0 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 Malloc1 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.580 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 [2024-11-06 13:38:53.969460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.840 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:30.840 00:14:30.840 Discovery Log Number of Records 2, Generation counter 2 00:14:30.840 =====Discovery Log Entry 0====== 00:14:30.840 trtype: tcp 00:14:30.840 adrfam: ipv4 00:14:30.840 subtype: current discovery subsystem 00:14:30.840 treq: not required 00:14:30.840 portid: 0 00:14:30.840 trsvcid: 4420 
00:14:30.840 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:30.840 traddr: 10.0.0.2 00:14:30.840 eflags: explicit discovery connections, duplicate discovery information 00:14:30.840 sectype: none 00:14:30.840 =====Discovery Log Entry 1====== 00:14:30.840 trtype: tcp 00:14:30.840 adrfam: ipv4 00:14:30.840 subtype: nvme subsystem 00:14:30.840 treq: not required 00:14:30.840 portid: 0 00:14:30.840 trsvcid: 4420 00:14:30.840 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:30.840 traddr: 10.0.0.2 00:14:30.840 eflags: none 00:14:30.840 sectype: none 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:30.840 13:38:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.753 13:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:32.753 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:32.753 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.753 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:32.753 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:32.753 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:34.665 
13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:34.665 /dev/nvme0n2 ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:34.665 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.666 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:34.666 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.666 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.666 rmmod nvme_tcp 00:14:34.666 rmmod nvme_fabrics 00:14:34.666 rmmod nvme_keyring 00:14:34.666 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 588049 ']' 
00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 588049 ']' 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 588049' 00:14:34.926 killing process with pid 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 588049 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.926 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.473 00:14:37.473 real 0m14.988s 00:14:37.473 user 0m22.609s 00:14:37.473 sys 0m6.229s 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.473 ************************************ 00:14:37.473 END TEST nvmf_nvme_cli 00:14:37.473 ************************************ 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.473 ************************************ 00:14:37.473 START TEST 
nvmf_vfio_user 00:14:37.473 ************************************ 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:37.473 * Looking for test storage... 00:14:37.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.473 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.474 13:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:37.474 13:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.474 --rc genhtml_branch_coverage=1 00:14:37.474 --rc genhtml_function_coverage=1 00:14:37.474 --rc genhtml_legend=1 00:14:37.474 --rc geninfo_all_blocks=1 00:14:37.474 --rc geninfo_unexecuted_blocks=1 00:14:37.474 00:14:37.474 ' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.474 --rc genhtml_branch_coverage=1 00:14:37.474 --rc genhtml_function_coverage=1 00:14:37.474 --rc genhtml_legend=1 00:14:37.474 --rc geninfo_all_blocks=1 00:14:37.474 --rc geninfo_unexecuted_blocks=1 00:14:37.474 00:14:37.474 ' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.474 --rc genhtml_branch_coverage=1 00:14:37.474 --rc genhtml_function_coverage=1 00:14:37.474 --rc genhtml_legend=1 00:14:37.474 --rc geninfo_all_blocks=1 00:14:37.474 --rc geninfo_unexecuted_blocks=1 00:14:37.474 00:14:37.474 ' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.474 --rc genhtml_branch_coverage=1 00:14:37.474 --rc genhtml_function_coverage=1 00:14:37.474 --rc genhtml_legend=1 00:14:37.474 --rc geninfo_all_blocks=1 00:14:37.474 --rc geninfo_unexecuted_blocks=1 00:14:37.474 00:14:37.474 ' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.474 
13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.474 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.475 13:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=589681 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 589681' 00:14:37.475 Process pid: 589681 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 589681 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 
589681 ']' 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:37.475 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:37.475 [2024-11-06 13:39:00.678322] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:14:37.475 [2024-11-06 13:39:00.678396] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.475 [2024-11-06 13:39:00.753666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.475 [2024-11-06 13:39:00.790574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.475 [2024-11-06 13:39:00.790603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.475 [2024-11-06 13:39:00.790610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.475 [2024-11-06 13:39:00.790617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.475 [2024-11-06 13:39:00.790623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:37.475 [2024-11-06 13:39:00.792007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.475 [2024-11-06 13:39:00.792024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.475 [2024-11-06 13:39:00.792157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.475 [2024-11-06 13:39:00.792158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.420 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:38.420 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:38.420 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:39.362 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:39.623 Malloc1 00:14:39.623 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:39.885 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:40.147 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:40.147 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:40.147 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:40.147 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:40.409 Malloc2 00:14:40.409 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:40.669 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:40.670 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:40.930 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:40.930 [2024-11-06 13:39:04.229515] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:14:40.931 [2024-11-06 13:39:04.229585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590454 ] 00:14:40.931 [2024-11-06 13:39:04.283876] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:40.931 [2024-11-06 13:39:04.293057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.931 [2024-11-06 13:39:04.293080] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f86bc096000 00:14:40.931 [2024-11-06 13:39:04.294054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.295051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.296056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.297070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.298072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.299077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.300081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.301083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.931 [2024-11-06 13:39:04.302098] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.931 [2024-11-06 13:39:04.302108] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f86bc08b000 00:14:40.931 [2024-11-06 13:39:04.303434] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:41.195 [2024-11-06 13:39:04.320349] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:41.195 [2024-11-06 13:39:04.320374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:41.195 [2024-11-06 13:39:04.325219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:41.195 [2024-11-06 13:39:04.325267] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:41.195 [2024-11-06 13:39:04.325353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:41.195 [2024-11-06 13:39:04.325368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:41.195 [2024-11-06 13:39:04.325374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:41.195 [2024-11-06 13:39:04.326219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:41.195 [2024-11-06 13:39:04.326229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:41.195 [2024-11-06 13:39:04.326236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:41.195 [2024-11-06 13:39:04.327221] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:41.195 [2024-11-06 13:39:04.327229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:41.195 [2024-11-06 13:39:04.327237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.328221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:41.195 [2024-11-06 13:39:04.328229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.329236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:41.195 [2024-11-06 13:39:04.329245] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:41.195 [2024-11-06 13:39:04.329250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.329257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.329365] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:41.195 [2024-11-06 13:39:04.329370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.329377] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:41.195 [2024-11-06 13:39:04.330243] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:41.195 [2024-11-06 13:39:04.331246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:41.195 [2024-11-06 13:39:04.332255] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:41.195 [2024-11-06 13:39:04.333253] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.195 [2024-11-06 13:39:04.333306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:41.195 [2024-11-06 13:39:04.334262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:41.195 [2024-11-06 13:39:04.334270] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:41.195 [2024-11-06 13:39:04.334275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:41.195 [2024-11-06 13:39:04.334304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334318] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.195 [2024-11-06 13:39:04.334323] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.195 [2024-11-06 13:39:04.334326] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.195 [2024-11-06 13:39:04.334339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:14:41.195 [2024-11-06 13:39:04.334378] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:41.195 [2024-11-06 13:39:04.334382] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:41.195 [2024-11-06 13:39:04.334387] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:41.195 [2024-11-06 13:39:04.334392] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:41.195 [2024-11-06 13:39:04.334398] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:41.195 [2024-11-06 13:39:04.334403] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:41.195 [2024-11-06 13:39:04.334408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:41.195 [2024-11-06 13:39:04.334447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.195 [2024-11-06 13:39:04.334455] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.195 [2024-11-06 13:39:04.334464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.195 [2024-11-06 13:39:04.334472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.195 [2024-11-06 13:39:04.334478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:41.195 [2024-11-06 13:39:04.334508] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:41.195 [2024-11-06 13:39:04.334514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:41.195 [2024-11-06 13:39:04.334611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334627] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:41.195 [2024-11-06 13:39:04.334631] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:41.195 [2024-11-06 13:39:04.334635] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.195 [2024-11-06 13:39:04.334641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:41.195 [2024-11-06 13:39:04.334666] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:41.195 [2024-11-06 13:39:04.334677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:14:41.195 [2024-11-06 13:39:04.334696] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.195 [2024-11-06 13:39:04.334700] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.195 [2024-11-06 13:39:04.334704] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.195 [2024-11-06 13:39:04.334710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.195 [2024-11-06 13:39:04.334728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.334740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334759] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.196 [2024-11-06 13:39:04.334763] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.196 [2024-11-06 13:39:04.334767] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.196 [2024-11-06 13:39:04.334773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:14:41.196 [2024-11-06 13:39:04.334790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334826] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:41.196 [2024-11-06 13:39:04.334831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:41.196 [2024-11-06 13:39:04.334836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:41.196 [2024-11-06 13:39:04.334853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.334877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.334896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.334916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.334942] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:41.196 [2024-11-06 13:39:04.334946] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:41.196 [2024-11-06 13:39:04.334950] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:41.196 [2024-11-06 13:39:04.334954] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:41.196 [2024-11-06 13:39:04.334957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:41.196 [2024-11-06 13:39:04.334963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:41.196 [2024-11-06 13:39:04.334971] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:41.196 [2024-11-06 13:39:04.334976] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:41.196 [2024-11-06 13:39:04.334979] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.196 [2024-11-06 13:39:04.334985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.334993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:41.196 [2024-11-06 13:39:04.334997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.196 [2024-11-06 13:39:04.335000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.196 [2024-11-06 13:39:04.335006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.335014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:41.196 [2024-11-06 13:39:04.335018] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:41.196 [2024-11-06 13:39:04.335022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.196 [2024-11-06 13:39:04.335028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:41.196 [2024-11-06 13:39:04.335035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 
13:39:04.335047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.335057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:41.196 [2024-11-06 13:39:04.335065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:41.196 ===================================================== 00:14:41.196 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.196 ===================================================== 00:14:41.196 Controller Capabilities/Features 00:14:41.196 ================================ 00:14:41.196 Vendor ID: 4e58 00:14:41.196 Subsystem Vendor ID: 4e58 00:14:41.196 Serial Number: SPDK1 00:14:41.196 Model Number: SPDK bdev Controller 00:14:41.196 Firmware Version: 25.01 00:14:41.196 Recommended Arb Burst: 6 00:14:41.196 IEEE OUI Identifier: 8d 6b 50 00:14:41.196 Multi-path I/O 00:14:41.196 May have multiple subsystem ports: Yes 00:14:41.196 May have multiple controllers: Yes 00:14:41.196 Associated with SR-IOV VF: No 00:14:41.196 Max Data Transfer Size: 131072 00:14:41.196 Max Number of Namespaces: 32 00:14:41.196 Max Number of I/O Queues: 127 00:14:41.196 NVMe Specification Version (VS): 1.3 00:14:41.196 NVMe Specification Version (Identify): 1.3 00:14:41.196 Maximum Queue Entries: 256 00:14:41.196 Contiguous Queues Required: Yes 00:14:41.196 Arbitration Mechanisms Supported 00:14:41.196 Weighted Round Robin: Not Supported 00:14:41.196 Vendor Specific: Not Supported 00:14:41.196 Reset Timeout: 15000 ms 00:14:41.196 Doorbell Stride: 4 bytes 00:14:41.196 NVM Subsystem Reset: Not Supported 00:14:41.196 Command Sets Supported 00:14:41.196 NVM Command Set: Supported 00:14:41.196 Boot Partition: Not Supported 00:14:41.196 Memory Page Size Minimum: 4096 bytes 00:14:41.196 
Memory Page Size Maximum: 4096 bytes 00:14:41.196 Persistent Memory Region: Not Supported 00:14:41.196 Optional Asynchronous Events Supported 00:14:41.196 Namespace Attribute Notices: Supported 00:14:41.196 Firmware Activation Notices: Not Supported 00:14:41.196 ANA Change Notices: Not Supported 00:14:41.196 PLE Aggregate Log Change Notices: Not Supported 00:14:41.196 LBA Status Info Alert Notices: Not Supported 00:14:41.196 EGE Aggregate Log Change Notices: Not Supported 00:14:41.196 Normal NVM Subsystem Shutdown event: Not Supported 00:14:41.196 Zone Descriptor Change Notices: Not Supported 00:14:41.196 Discovery Log Change Notices: Not Supported 00:14:41.196 Controller Attributes 00:14:41.196 128-bit Host Identifier: Supported 00:14:41.196 Non-Operational Permissive Mode: Not Supported 00:14:41.196 NVM Sets: Not Supported 00:14:41.196 Read Recovery Levels: Not Supported 00:14:41.196 Endurance Groups: Not Supported 00:14:41.196 Predictable Latency Mode: Not Supported 00:14:41.196 Traffic Based Keep Alive: Not Supported 00:14:41.196 Namespace Granularity: Not Supported 00:14:41.196 SQ Associations: Not Supported 00:14:41.196 UUID List: Not Supported 00:14:41.196 Multi-Domain Subsystem: Not Supported 00:14:41.196 Fixed Capacity Management: Not Supported 00:14:41.196 Variable Capacity Management: Not Supported 00:14:41.196 Delete Endurance Group: Not Supported 00:14:41.196 Delete NVM Set: Not Supported 00:14:41.196 Extended LBA Formats Supported: Not Supported 00:14:41.196 Flexible Data Placement Supported: Not Supported 00:14:41.196 00:14:41.196 Controller Memory Buffer Support 00:14:41.196 ================================ 00:14:41.196 Supported: No 00:14:41.196 00:14:41.196 Persistent Memory Region Support 00:14:41.196 ================================ 00:14:41.196 Supported: No 00:14:41.196 00:14:41.196 Admin Command Set Attributes 00:14:41.196 ============================ 00:14:41.196 Security Send/Receive: Not Supported 00:14:41.196 Format NVM: Not Supported 
00:14:41.196 Firmware Activate/Download: Not Supported 00:14:41.196 Namespace Management: Not Supported 00:14:41.196 Device Self-Test: Not Supported 00:14:41.196 Directives: Not Supported 00:14:41.196 NVMe-MI: Not Supported 00:14:41.196 Virtualization Management: Not Supported 00:14:41.196 Doorbell Buffer Config: Not Supported 00:14:41.197 Get LBA Status Capability: Not Supported 00:14:41.197 Command & Feature Lockdown Capability: Not Supported 00:14:41.197 Abort Command Limit: 4 00:14:41.197 Async Event Request Limit: 4 00:14:41.197 Number of Firmware Slots: N/A 00:14:41.197 Firmware Slot 1 Read-Only: N/A 00:14:41.197 Firmware Activation Without Reset: N/A 00:14:41.197 Multiple Update Detection Support: N/A 00:14:41.197 Firmware Update Granularity: No Information Provided 00:14:41.197 Per-Namespace SMART Log: No 00:14:41.197 Asymmetric Namespace Access Log Page: Not Supported 00:14:41.197 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:41.197 Command Effects Log Page: Supported 00:14:41.197 Get Log Page Extended Data: Supported 00:14:41.197 Telemetry Log Pages: Not Supported 00:14:41.197 Persistent Event Log Pages: Not Supported 00:14:41.197 Supported Log Pages Log Page: May Support 00:14:41.197 Commands Supported & Effects Log Page: Not Supported 00:14:41.197 Feature Identifiers & Effects Log Page: May Support 00:14:41.197 NVMe-MI Commands & Effects Log Page: May Support 00:14:41.197 Data Area 4 for Telemetry Log: Not Supported 00:14:41.197 Error Log Page Entries Supported: 128 00:14:41.197 Keep Alive: Supported 00:14:41.197 Keep Alive Granularity: 10000 ms 00:14:41.197 00:14:41.197 NVM Command Set Attributes 00:14:41.197 ========================== 00:14:41.197 Submission Queue Entry Size 00:14:41.197 Max: 64 00:14:41.197 Min: 64 00:14:41.197 Completion Queue Entry Size 00:14:41.197 Max: 16 00:14:41.197 Min: 16 00:14:41.197 Number of Namespaces: 32 00:14:41.197 Compare Command: Supported 00:14:41.197 Write Uncorrectable Command: Not Supported 00:14:41.197 Dataset 
Management Command: Supported 00:14:41.197 Write Zeroes Command: Supported 00:14:41.197 Set Features Save Field: Not Supported 00:14:41.197 Reservations: Not Supported 00:14:41.197 Timestamp: Not Supported 00:14:41.197 Copy: Supported 00:14:41.197 Volatile Write Cache: Present 00:14:41.197 Atomic Write Unit (Normal): 1 00:14:41.197 Atomic Write Unit (PFail): 1 00:14:41.197 Atomic Compare & Write Unit: 1 00:14:41.197 Fused Compare & Write: Supported 00:14:41.197 Scatter-Gather List 00:14:41.197 SGL Command Set: Supported (Dword aligned) 00:14:41.197 SGL Keyed: Not Supported 00:14:41.197 SGL Bit Bucket Descriptor: Not Supported 00:14:41.197 SGL Metadata Pointer: Not Supported 00:14:41.197 Oversized SGL: Not Supported 00:14:41.197 SGL Metadata Address: Not Supported 00:14:41.197 SGL Offset: Not Supported 00:14:41.197 Transport SGL Data Block: Not Supported 00:14:41.197 Replay Protected Memory Block: Not Supported 00:14:41.197 00:14:41.197 Firmware Slot Information 00:14:41.197 ========================= 00:14:41.197 Active slot: 1 00:14:41.197 Slot 1 Firmware Revision: 25.01 00:14:41.197 00:14:41.197 00:14:41.197 Commands Supported and Effects 00:14:41.197 ============================== 00:14:41.197 Admin Commands 00:14:41.197 -------------- 00:14:41.197 Get Log Page (02h): Supported 00:14:41.197 Identify (06h): Supported 00:14:41.197 Abort (08h): Supported 00:14:41.197 Set Features (09h): Supported 00:14:41.197 Get Features (0Ah): Supported 00:14:41.197 Asynchronous Event Request (0Ch): Supported 00:14:41.197 Keep Alive (18h): Supported 00:14:41.197 I/O Commands 00:14:41.197 ------------ 00:14:41.197 Flush (00h): Supported LBA-Change 00:14:41.197 Write (01h): Supported LBA-Change 00:14:41.197 Read (02h): Supported 00:14:41.197 Compare (05h): Supported 00:14:41.197 Write Zeroes (08h): Supported LBA-Change 00:14:41.197 Dataset Management (09h): Supported LBA-Change 00:14:41.197 Copy (19h): Supported LBA-Change 00:14:41.197 00:14:41.197 Error Log 00:14:41.197 ========= 
00:14:41.197 00:14:41.197 Arbitration 00:14:41.197 =========== 00:14:41.197 Arbitration Burst: 1 00:14:41.197 00:14:41.197 Power Management 00:14:41.197 ================ 00:14:41.197 Number of Power States: 1 00:14:41.197 Current Power State: Power State #0 00:14:41.197 Power State #0: 00:14:41.197 Max Power: 0.00 W 00:14:41.197 Non-Operational State: Operational 00:14:41.197 Entry Latency: Not Reported 00:14:41.197 Exit Latency: Not Reported 00:14:41.197 Relative Read Throughput: 0 00:14:41.197 Relative Read Latency: 0 00:14:41.197 Relative Write Throughput: 0 00:14:41.197 Relative Write Latency: 0 00:14:41.197 Idle Power: Not Reported 00:14:41.197 Active Power: Not Reported 00:14:41.197 Non-Operational Permissive Mode: Not Supported 00:14:41.197 00:14:41.197 Health Information 00:14:41.197 ================== 00:14:41.197 Critical Warnings: 00:14:41.197 Available Spare Space: OK 00:14:41.197 Temperature: OK 00:14:41.197 Device Reliability: OK 00:14:41.197 Read Only: No 00:14:41.197 Volatile Memory Backup: OK 00:14:41.197 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:41.197 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:41.197 Available Spare: 0% 00:14:41.197 
[2024-11-06 13:39:04.335169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:41.197 [2024-11-06 13:39:04.335183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:41.197 [2024-11-06 13:39:04.335211] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:41.197 [2024-11-06 13:39:04.335222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.197 [2024-11-06 13:39:04.335229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.197 [2024-11-06 13:39:04.335235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.197 [2024-11-06 13:39:04.335242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.197 [2024-11-06 13:39:04.335272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:41.197 [2024-11-06 13:39:04.335281] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:41.197 [2024-11-06 13:39:04.336271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.197 [2024-11-06 13:39:04.336312] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:41.197 [2024-11-06 13:39:04.336318] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:41.197 [2024-11-06 13:39:04.337276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:41.197 [2024-11-06 13:39:04.337288] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:41.197 [2024-11-06 13:39:04.337348] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:41.197 [2024-11-06 13:39:04.343754] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:41.197 
Available Spare Threshold: 0% 00:14:41.197 Life Percentage Used: 0% 00:14:41.197 Data Units Read: 0 00:14:41.197 Data 
Units Written: 0 00:14:41.197 Host Read Commands: 0 00:14:41.197 Host Write Commands: 0 00:14:41.197 Controller Busy Time: 0 minutes 00:14:41.197 Power Cycles: 0 00:14:41.197 Power On Hours: 0 hours 00:14:41.197 Unsafe Shutdowns: 0 00:14:41.197 Unrecoverable Media Errors: 0 00:14:41.197 Lifetime Error Log Entries: 0 00:14:41.197 Warning Temperature Time: 0 minutes 00:14:41.197 Critical Temperature Time: 0 minutes 00:14:41.197 00:14:41.197 Number of Queues 00:14:41.197 ================ 00:14:41.197 Number of I/O Submission Queues: 127 00:14:41.197 Number of I/O Completion Queues: 127 00:14:41.197 00:14:41.197 Active Namespaces 00:14:41.197 ================= 00:14:41.197 Namespace ID:1 00:14:41.197 Error Recovery Timeout: Unlimited 00:14:41.197 Command Set Identifier: NVM (00h) 00:14:41.197 Deallocate: Supported 00:14:41.197 Deallocated/Unwritten Error: Not Supported 00:14:41.197 Deallocated Read Value: Unknown 00:14:41.197 Deallocate in Write Zeroes: Not Supported 00:14:41.197 Deallocated Guard Field: 0xFFFF 00:14:41.197 Flush: Supported 00:14:41.197 Reservation: Supported 00:14:41.197 Namespace Sharing Capabilities: Multiple Controllers 00:14:41.197 Size (in LBAs): 131072 (0GiB) 00:14:41.197 Capacity (in LBAs): 131072 (0GiB) 00:14:41.197 Utilization (in LBAs): 131072 (0GiB) 00:14:41.197 NGUID: 4916F134342749A7A0B9E302C746BD23 00:14:41.197 UUID: 4916f134-3427-49a7-a0b9-e302c746bd23 00:14:41.197 Thin Provisioning: Not Supported 00:14:41.197 Per-NS Atomic Units: Yes 00:14:41.197 Atomic Boundary Size (Normal): 0 00:14:41.197 Atomic Boundary Size (PFail): 0 00:14:41.197 Atomic Boundary Offset: 0 00:14:41.197 Maximum Single Source Range Length: 65535 00:14:41.197 Maximum Copy Length: 65535 00:14:41.197 Maximum Source Range Count: 1 00:14:41.198 NGUID/EUI64 Never Reused: No 00:14:41.198 Namespace Write Protected: No 00:14:41.198 Number of LBA Formats: 1 00:14:41.198 Current LBA Format: LBA Format #00 00:14:41.198 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:14:41.198 00:14:41.198 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:41.198 [2024-11-06 13:39:04.544446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.485 Initializing NVMe Controllers 00:14:46.485 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.485 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:46.485 Initialization complete. Launching workers. 00:14:46.485 ======================================================== 00:14:46.485 Latency(us) 00:14:46.485 Device Information : IOPS MiB/s Average min max 00:14:46.485 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40012.63 156.30 3199.21 851.86 9751.32 00:14:46.485 ======================================================== 00:14:46.485 Total : 40012.63 156.30 3199.21 851.86 9751.32 00:14:46.485 00:14:46.485 [2024-11-06 13:39:09.562066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.485 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:46.485 [2024-11-06 13:39:09.755974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.775 Initializing NVMe Controllers 00:14:51.775 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:14:51.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:51.776 Initialization complete. Launching workers. 00:14:51.776 ======================================================== 00:14:51.776 Latency(us) 00:14:51.776 Device Information : IOPS MiB/s Average min max 00:14:51.776 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8006.26 7629.59 15962.19 00:14:51.776 ======================================================== 00:14:51.776 Total : 16000.00 62.50 8006.26 7629.59 15962.19 00:14:51.776 00:14:51.776 [2024-11-06 13:39:14.792306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.776 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:51.776 [2024-11-06 13:39:15.001200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.065 [2024-11-06 13:39:20.109076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.065 Initializing NVMe Controllers 00:14:57.065 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.065 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.065 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:57.065 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:57.065 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:57.065 Initialization complete. Launching workers. 
00:14:57.065 Starting thread on core 2 00:14:57.065 Starting thread on core 3 00:14:57.065 Starting thread on core 1 00:14:57.065 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:57.065 [2024-11-06 13:39:20.399146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.373 [2024-11-06 13:39:23.463123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.373 Initializing NVMe Controllers 00:15:00.373 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.373 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.373 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:00.373 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:00.373 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:00.373 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:00.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:00.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:00.373 Initialization complete. Launching workers. 
00:15:00.373 Starting thread on core 1 with urgent priority queue 00:15:00.373 Starting thread on core 2 with urgent priority queue 00:15:00.373 Starting thread on core 3 with urgent priority queue 00:15:00.373 Starting thread on core 0 with urgent priority queue 00:15:00.373 SPDK bdev Controller (SPDK1 ) core 0: 8809.00 IO/s 11.35 secs/100000 ios 00:15:00.373 SPDK bdev Controller (SPDK1 ) core 1: 11315.00 IO/s 8.84 secs/100000 ios 00:15:00.373 SPDK bdev Controller (SPDK1 ) core 2: 10315.67 IO/s 9.69 secs/100000 ios 00:15:00.373 SPDK bdev Controller (SPDK1 ) core 3: 13563.33 IO/s 7.37 secs/100000 ios 00:15:00.373 ======================================================== 00:15:00.373 00:15:00.373 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:00.634 [2024-11-06 13:39:23.762207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.634 Initializing NVMe Controllers 00:15:00.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.634 Namespace ID: 1 size: 0GB 00:15:00.634 Initialization complete. 00:15:00.634 INFO: using host memory buffer for IO 00:15:00.634 Hello world! 
00:15:00.634 [2024-11-06 13:39:23.798431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.634 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:00.895 [2024-11-06 13:39:24.084163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.838 Initializing NVMe Controllers 00:15:01.838 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.838 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.838 Initialization complete. Launching workers. 00:15:01.838 submit (in ns) avg, min, max = 7297.0, 3974.2, 3999400.8 00:15:01.838 complete (in ns) avg, min, max = 17663.2, 2376.7, 4995260.0 00:15:01.838 00:15:01.838 Submit histogram 00:15:01.838 ================ 00:15:01.838 Range in us Cumulative Count 00:15:01.838 3.973 - 4.000: 1.8270% ( 345) 00:15:01.838 4.000 - 4.027: 9.3894% ( 1428) 00:15:01.838 4.027 - 4.053: 19.7003% ( 1947) 00:15:01.838 4.053 - 4.080: 31.1497% ( 2162) 00:15:01.838 4.080 - 4.107: 41.6671% ( 1986) 00:15:01.838 4.107 - 4.133: 54.3187% ( 2389) 00:15:01.838 4.133 - 4.160: 71.9059% ( 3321) 00:15:01.838 4.160 - 4.187: 86.8241% ( 2817) 00:15:01.838 4.187 - 4.213: 95.0379% ( 1551) 00:15:01.838 4.213 - 4.240: 98.2683% ( 610) 00:15:01.838 4.240 - 4.267: 99.2533% ( 186) 00:15:01.838 4.267 - 4.293: 99.5552% ( 57) 00:15:01.838 4.293 - 4.320: 99.6028% ( 9) 00:15:01.838 4.373 - 4.400: 99.6081% ( 1) 00:15:01.838 4.453 - 4.480: 99.6134% ( 1) 00:15:01.838 4.773 - 4.800: 99.6187% ( 1) 00:15:01.838 4.907 - 4.933: 99.6240% ( 1) 00:15:01.838 5.173 - 5.200: 99.6293% ( 1) 00:15:01.838 5.253 - 5.280: 99.6346% ( 1) 00:15:01.838 5.680 - 5.707: 99.6452% ( 2) 00:15:01.838 5.893 - 5.920: 99.6505% ( 1) 
00:15:01.838 6.027 - 6.053: 99.6558% ( 1) 00:15:01.838 6.053 - 6.080: 99.6611% ( 1) 00:15:01.838 6.080 - 6.107: 99.6664% ( 1) 00:15:01.838 6.107 - 6.133: 99.6770% ( 2) 00:15:01.838 6.133 - 6.160: 99.6875% ( 2) 00:15:01.838 6.187 - 6.213: 99.6981% ( 2) 00:15:01.838 6.213 - 6.240: 99.7087% ( 2) 00:15:01.838 6.240 - 6.267: 99.7193% ( 2) 00:15:01.838 6.267 - 6.293: 99.7246% ( 1) 00:15:01.838 6.320 - 6.347: 99.7352% ( 2) 00:15:01.838 6.347 - 6.373: 99.7458% ( 2) 00:15:01.838 6.373 - 6.400: 99.7511% ( 1) 00:15:01.838 6.400 - 6.427: 99.7564% ( 1) 00:15:01.838 6.427 - 6.453: 99.7617% ( 1) 00:15:01.838 6.453 - 6.480: 99.7670% ( 1) 00:15:01.838 6.480 - 6.507: 99.7723% ( 1) 00:15:01.838 6.507 - 6.533: 99.7776% ( 1) 00:15:01.838 6.533 - 6.560: 99.7882% ( 2) 00:15:01.838 6.587 - 6.613: 99.7988% ( 2) 00:15:01.838 6.640 - 6.667: 99.8041% ( 1) 00:15:01.838 6.693 - 6.720: 99.8094% ( 1) 00:15:01.838 6.747 - 6.773: 99.8199% ( 2) 00:15:01.838 6.800 - 6.827: 99.8305% ( 2) 00:15:01.838 6.827 - 6.880: 99.8358% ( 1) 00:15:01.838 6.933 - 6.987: 99.8411% ( 1) 00:15:01.838 7.040 - 7.093: 99.8464% ( 1) 00:15:01.838 7.200 - 7.253: 99.8570% ( 2) 00:15:01.838 7.253 - 7.307: 99.8729% ( 3) 00:15:01.838 7.360 - 7.413: 99.8835% ( 2) 00:15:01.838 7.520 - 7.573: 99.8888% ( 1) 00:15:01.838 7.680 - 7.733: 99.8941% ( 1) 00:15:01.838 7.733 - 7.787: 99.9047% ( 2) 00:15:01.838 7.947 - 8.000: 99.9100% ( 1) 00:15:01.838 8.480 - 8.533: 99.9153% ( 1) 00:15:01.838 10.293 - 10.347: 99.9206% ( 1) 00:15:01.838 3986.773 - 4014.080: 100.0000% ( 15) 00:15:01.838 00:15:01.838 Complete histogram 00:15:01.838 ================== 00:15:01.838 Range in us Cumulative Count 00:15:01.838 2.373 - 2.387: 0.0530% ( 10) 00:15:01.838 2.387 - 2.400: 0.6408% ( 111) 00:15:01.838 2.400 - 2.413: 0.7308% ( 17) 00:15:01.838 2.413 - 2.427: 0.8208% ( 17) 00:15:01.838 2.427 - 2.440: 27.2732% ( 4995) 00:15:01.838 2.440 - 2.453: 56.4688% ( 5513) 00:15:01.838 2.453 - 2.467: 65.2651% ( 1661) 00:15:01.838 2.467 - 2.480: 75.0252% ( 1843) 
00:15:01.838 2.480 - 2.493: 79.5795% ( 860) 00:15:01.838 2.493 - 2.507: 82.3439% ( 522) 00:15:01.838 2.507 - 2.520: 89.0536% ( 1267) 00:15:01.838 2.520 - 2.533: 94.4500% ( 1019) 00:15:01.838 2.533 - 2.547: 96.9655% ( 475) 00:15:01.838 2.547 - 2.560: 98.4483% ( 280) 00:15:01.838 2.560 - 2.573: 99.1474% ( 132) 00:15:01.838 2.573 - 2.587: 99.3698% ( 42) 00:15:01.838 2.587 - 2.600: 99.4069% ( 7) 00:15:01.838 2.613 - 2.627: 99.4122% ( 1) 00:15:01.838 2.653 - 2.667: 99.4175% ( 1) 00:15:01.838 4.240 - 4.267: 99.4228% ( 1) 00:15:01.838 4.293 - [2024-11-06 13:39:25.107539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.838 4.320: 99.4281% ( 1) 00:15:01.838 4.480 - 4.507: 99.4334% ( 1) 00:15:01.838 4.613 - 4.640: 99.4439% ( 2) 00:15:01.838 4.773 - 4.800: 99.4598% ( 3) 00:15:01.838 4.800 - 4.827: 99.4757% ( 3) 00:15:01.838 4.827 - 4.853: 99.4863% ( 2) 00:15:01.838 4.853 - 4.880: 99.4916% ( 1) 00:15:01.838 4.880 - 4.907: 99.4969% ( 1) 00:15:01.838 4.907 - 4.933: 99.5022% ( 1) 00:15:01.838 4.960 - 4.987: 99.5075% ( 1) 00:15:01.838 4.987 - 5.013: 99.5128% ( 1) 00:15:01.838 5.067 - 5.093: 99.5181% ( 1) 00:15:01.838 5.093 - 5.120: 99.5340% ( 3) 00:15:01.838 5.120 - 5.147: 99.5393% ( 1) 00:15:01.838 5.147 - 5.173: 99.5552% ( 3) 00:15:01.838 5.413 - 5.440: 99.5605% ( 1) 00:15:01.838 5.467 - 5.493: 99.5657% ( 1) 00:15:01.838 5.600 - 5.627: 99.5710% ( 1) 00:15:01.838 5.627 - 5.653: 99.5763% ( 1) 00:15:01.838 5.653 - 5.680: 99.5816% ( 1) 00:15:01.838 5.920 - 5.947: 99.5869% ( 1) 00:15:01.838 6.213 - 6.240: 99.5922% ( 1) 00:15:01.838 6.373 - 6.400: 99.5975% ( 1) 00:15:01.838 6.507 - 6.533: 99.6028% ( 1) 00:15:01.838 9.760 - 9.813: 99.6081% ( 1) 00:15:01.838 10.400 - 10.453: 99.6134% ( 1) 00:15:01.838 16.107 - 16.213: 99.6187% ( 1) 00:15:01.838 2225.493 - 2239.147: 99.6240% ( 1) 00:15:01.838 3017.387 - 3031.040: 99.6293% ( 1) 00:15:01.838 3031.040 - 3044.693: 99.6346% ( 1) 00:15:01.838 3986.773 - 4014.080: 99.9841% ( 66) 
00:15:01.838 4969.813 - 4997.120: 100.0000% ( 3) 00:15:01.838 00:15:01.838 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:01.838 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:01.839 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:01.839 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:01.839 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:02.100 [ 00:15:02.100 { 00:15:02.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:02.100 "subtype": "Discovery", 00:15:02.100 "listen_addresses": [], 00:15:02.100 "allow_any_host": true, 00:15:02.100 "hosts": [] 00:15:02.100 }, 00:15:02.100 { 00:15:02.100 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:02.100 "subtype": "NVMe", 00:15:02.100 "listen_addresses": [ 00:15:02.100 { 00:15:02.100 "trtype": "VFIOUSER", 00:15:02.100 "adrfam": "IPv4", 00:15:02.100 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:02.100 "trsvcid": "0" 00:15:02.100 } 00:15:02.100 ], 00:15:02.100 "allow_any_host": true, 00:15:02.100 "hosts": [], 00:15:02.100 "serial_number": "SPDK1", 00:15:02.100 "model_number": "SPDK bdev Controller", 00:15:02.100 "max_namespaces": 32, 00:15:02.100 "min_cntlid": 1, 00:15:02.100 "max_cntlid": 65519, 00:15:02.100 "namespaces": [ 00:15:02.100 { 00:15:02.100 "nsid": 1, 00:15:02.101 "bdev_name": "Malloc1", 00:15:02.101 "name": "Malloc1", 00:15:02.101 "nguid": "4916F134342749A7A0B9E302C746BD23", 00:15:02.101 "uuid": "4916f134-3427-49a7-a0b9-e302c746bd23" 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 }, 00:15:02.101 { 
00:15:02.101 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:02.101 "subtype": "NVMe", 00:15:02.101 "listen_addresses": [ 00:15:02.101 { 00:15:02.101 "trtype": "VFIOUSER", 00:15:02.101 "adrfam": "IPv4", 00:15:02.101 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:02.101 "trsvcid": "0" 00:15:02.101 } 00:15:02.101 ], 00:15:02.101 "allow_any_host": true, 00:15:02.101 "hosts": [], 00:15:02.101 "serial_number": "SPDK2", 00:15:02.101 "model_number": "SPDK bdev Controller", 00:15:02.101 "max_namespaces": 32, 00:15:02.101 "min_cntlid": 1, 00:15:02.101 "max_cntlid": 65519, 00:15:02.101 "namespaces": [ 00:15:02.101 { 00:15:02.101 "nsid": 1, 00:15:02.101 "bdev_name": "Malloc2", 00:15:02.101 "name": "Malloc2", 00:15:02.101 "nguid": "5FF32A0D271441CBAB84BCC2D3AD5036", 00:15:02.101 "uuid": "5ff32a0d-2714-41cb-ab84-bcc2d3ad5036" 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=594961 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:02.101 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:02.362 Malloc3 00:15:02.362 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:02.362 [2024-11-06 13:39:25.535538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.362 [2024-11-06 13:39:25.673484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.362 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:02.362 Asynchronous Event Request test 00:15:02.362 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.362 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.362 Registering asynchronous event callbacks... 00:15:02.362 Starting namespace attribute notice tests for all controllers... 00:15:02.362 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:02.362 aer_cb - Changed Namespace 00:15:02.362 Cleaning up... 
00:15:02.624 [ 00:15:02.624 { 00:15:02.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:02.624 "subtype": "Discovery", 00:15:02.624 "listen_addresses": [], 00:15:02.624 "allow_any_host": true, 00:15:02.624 "hosts": [] 00:15:02.624 }, 00:15:02.624 { 00:15:02.624 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:02.624 "subtype": "NVMe", 00:15:02.624 "listen_addresses": [ 00:15:02.624 { 00:15:02.624 "trtype": "VFIOUSER", 00:15:02.624 "adrfam": "IPv4", 00:15:02.624 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:02.624 "trsvcid": "0" 00:15:02.624 } 00:15:02.624 ], 00:15:02.624 "allow_any_host": true, 00:15:02.624 "hosts": [], 00:15:02.624 "serial_number": "SPDK1", 00:15:02.624 "model_number": "SPDK bdev Controller", 00:15:02.624 "max_namespaces": 32, 00:15:02.624 "min_cntlid": 1, 00:15:02.624 "max_cntlid": 65519, 00:15:02.624 "namespaces": [ 00:15:02.624 { 00:15:02.624 "nsid": 1, 00:15:02.624 "bdev_name": "Malloc1", 00:15:02.624 "name": "Malloc1", 00:15:02.624 "nguid": "4916F134342749A7A0B9E302C746BD23", 00:15:02.624 "uuid": "4916f134-3427-49a7-a0b9-e302c746bd23" 00:15:02.624 }, 00:15:02.624 { 00:15:02.624 "nsid": 2, 00:15:02.624 "bdev_name": "Malloc3", 00:15:02.624 "name": "Malloc3", 00:15:02.624 "nguid": "D95600C3AD8C45EB951D999B6D899E5A", 00:15:02.624 "uuid": "d95600c3-ad8c-45eb-951d-999b6d899e5a" 00:15:02.624 } 00:15:02.624 ] 00:15:02.624 }, 00:15:02.624 { 00:15:02.624 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:02.624 "subtype": "NVMe", 00:15:02.624 "listen_addresses": [ 00:15:02.624 { 00:15:02.624 "trtype": "VFIOUSER", 00:15:02.624 "adrfam": "IPv4", 00:15:02.624 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:02.624 "trsvcid": "0" 00:15:02.624 } 00:15:02.624 ], 00:15:02.624 "allow_any_host": true, 00:15:02.624 "hosts": [], 00:15:02.624 "serial_number": "SPDK2", 00:15:02.624 "model_number": "SPDK bdev Controller", 00:15:02.624 "max_namespaces": 32, 00:15:02.624 "min_cntlid": 1, 00:15:02.624 "max_cntlid": 65519, 00:15:02.624 "namespaces": [ 
00:15:02.624 { 00:15:02.624 "nsid": 1, 00:15:02.624 "bdev_name": "Malloc2", 00:15:02.624 "name": "Malloc2", 00:15:02.624 "nguid": "5FF32A0D271441CBAB84BCC2D3AD5036", 00:15:02.624 "uuid": "5ff32a0d-2714-41cb-ab84-bcc2d3ad5036" 00:15:02.624 } 00:15:02.624 ] 00:15:02.624 } 00:15:02.624 ] 00:15:02.624 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 594961 00:15:02.624 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.624 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:02.624 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:02.624 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:02.624 [2024-11-06 13:39:25.916927] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:15:02.624 [2024-11-06 13:39:25.916970] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595225 ] 00:15:02.624 [2024-11-06 13:39:25.972795] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:02.624 [2024-11-06 13:39:25.978942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.624 [2024-11-06 13:39:25.978967] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f69152fe000 00:15:02.624 [2024-11-06 13:39:25.979946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.980950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.981956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.982964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.983976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.984983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.985990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.624 
[2024-11-06 13:39:25.986999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.624 [2024-11-06 13:39:25.988007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.624 [2024-11-06 13:39:25.988018] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f69152f3000 00:15:02.624 [2024-11-06 13:39:25.989342] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.888 [2024-11-06 13:39:26.008555] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:02.888 [2024-11-06 13:39:26.008586] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:02.888 [2024-11-06 13:39:26.010651] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:02.888 [2024-11-06 13:39:26.010696] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:02.888 [2024-11-06 13:39:26.010778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:02.888 [2024-11-06 13:39:26.010792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:02.888 [2024-11-06 13:39:26.010798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:02.888 [2024-11-06 13:39:26.011655] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:02.888 [2024-11-06 13:39:26.011664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:02.888 [2024-11-06 13:39:26.011672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:02.888 [2024-11-06 13:39:26.012662] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:02.888 [2024-11-06 13:39:26.012672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:02.888 [2024-11-06 13:39:26.012683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.888 [2024-11-06 13:39:26.013669] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:02.888 [2024-11-06 13:39:26.013679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.889 [2024-11-06 13:39:26.014671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:02.889 [2024-11-06 13:39:26.014680] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:02.889 [2024-11-06 13:39:26.014685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:02.889 [2024-11-06 13:39:26.014692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.889 [2024-11-06 13:39:26.014800] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:02.889 [2024-11-06 13:39:26.014805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.889 [2024-11-06 13:39:26.014810] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:02.889 [2024-11-06 13:39:26.015680] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:02.889 [2024-11-06 13:39:26.016686] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:02.889 [2024-11-06 13:39:26.017697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:02.889 [2024-11-06 13:39:26.018695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:02.889 [2024-11-06 13:39:26.018734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.889 [2024-11-06 13:39:26.019703] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:02.889 [2024-11-06 13:39:26.019711] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.889 [2024-11-06 13:39:26.019717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.019738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:02.889 [2024-11-06 13:39:26.019749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.019761] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.889 [2024-11-06 13:39:26.019766] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.889 [2024-11-06 13:39:26.019770] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.889 [2024-11-06 13:39:26.019781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.027763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.027778] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:02.889 [2024-11-06 13:39:26.027783] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:02.889 [2024-11-06 13:39:26.027788] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:02.889 [2024-11-06 13:39:26.027793] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:02.889 [2024-11-06 13:39:26.027800] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:02.889 [2024-11-06 13:39:26.027804] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:02.889 [2024-11-06 13:39:26.027809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.027819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.027829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.035751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.035764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.889 [2024-11-06 13:39:26.035772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.889 [2024-11-06 13:39:26.035781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.889 [2024-11-06 13:39:26.035789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.889 [2024-11-06 13:39:26.035794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.035801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.035810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.043752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.043763] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:02.889 [2024-11-06 13:39:26.043768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.043775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.043781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.043789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.051753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.051818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.051829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.889 
[2024-11-06 13:39:26.051836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:02.889 [2024-11-06 13:39:26.051841] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:02.889 [2024-11-06 13:39:26.051845] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.889 [2024-11-06 13:39:26.051851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.059752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.059763] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:02.889 [2024-11-06 13:39:26.059774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.059782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.059789] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.889 [2024-11-06 13:39:26.059793] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.889 [2024-11-06 13:39:26.059796] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.889 [2024-11-06 13:39:26.059803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.066789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.066805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.066813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.066821] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.889 [2024-11-06 13:39:26.066825] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.889 [2024-11-06 13:39:26.066829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.889 [2024-11-06 13:39:26.066835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.075751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.075769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075808] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.889 [2024-11-06 13:39:26.075812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:02.889 [2024-11-06 13:39:26.075817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:02.889 [2024-11-06 13:39:26.075834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:02.889 [2024-11-06 13:39:26.083751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:02.889 [2024-11-06 13:39:26.083765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.091750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 13:39:26.091763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.099751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 
13:39:26.099765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.107752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 13:39:26.107768] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:02.890 [2024-11-06 13:39:26.107773] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:02.890 [2024-11-06 13:39:26.107777] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:02.890 [2024-11-06 13:39:26.107780] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:02.890 [2024-11-06 13:39:26.107784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:02.890 [2024-11-06 13:39:26.107790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:02.890 [2024-11-06 13:39:26.107798] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:02.890 [2024-11-06 13:39:26.107802] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:02.890 [2024-11-06 13:39:26.107806] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.890 [2024-11-06 13:39:26.107812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.107819] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:02.890 [2024-11-06 13:39:26.107823] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.890 [2024-11-06 13:39:26.107827] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.890 [2024-11-06 13:39:26.107833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.107841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:02.890 [2024-11-06 13:39:26.107847] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:02.890 [2024-11-06 13:39:26.107851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.890 [2024-11-06 13:39:26.107857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:02.890 [2024-11-06 13:39:26.115750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 13:39:26.115765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 13:39:26.115775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:02.890 [2024-11-06 13:39:26.115782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:02.890 ===================================================== 00:15:02.890 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.890 ===================================================== 00:15:02.890 Controller Capabilities/Features 00:15:02.890 
================================ 00:15:02.890 Vendor ID: 4e58 00:15:02.890 Subsystem Vendor ID: 4e58 00:15:02.890 Serial Number: SPDK2 00:15:02.890 Model Number: SPDK bdev Controller 00:15:02.890 Firmware Version: 25.01 00:15:02.890 Recommended Arb Burst: 6 00:15:02.890 IEEE OUI Identifier: 8d 6b 50 00:15:02.890 Multi-path I/O 00:15:02.890 May have multiple subsystem ports: Yes 00:15:02.890 May have multiple controllers: Yes 00:15:02.890 Associated with SR-IOV VF: No 00:15:02.890 Max Data Transfer Size: 131072 00:15:02.890 Max Number of Namespaces: 32 00:15:02.890 Max Number of I/O Queues: 127 00:15:02.890 NVMe Specification Version (VS): 1.3 00:15:02.890 NVMe Specification Version (Identify): 1.3 00:15:02.890 Maximum Queue Entries: 256 00:15:02.890 Contiguous Queues Required: Yes 00:15:02.890 Arbitration Mechanisms Supported 00:15:02.890 Weighted Round Robin: Not Supported 00:15:02.890 Vendor Specific: Not Supported 00:15:02.890 Reset Timeout: 15000 ms 00:15:02.890 Doorbell Stride: 4 bytes 00:15:02.890 NVM Subsystem Reset: Not Supported 00:15:02.890 Command Sets Supported 00:15:02.890 NVM Command Set: Supported 00:15:02.890 Boot Partition: Not Supported 00:15:02.890 Memory Page Size Minimum: 4096 bytes 00:15:02.890 Memory Page Size Maximum: 4096 bytes 00:15:02.890 Persistent Memory Region: Not Supported 00:15:02.890 Optional Asynchronous Events Supported 00:15:02.890 Namespace Attribute Notices: Supported 00:15:02.890 Firmware Activation Notices: Not Supported 00:15:02.890 ANA Change Notices: Not Supported 00:15:02.890 PLE Aggregate Log Change Notices: Not Supported 00:15:02.890 LBA Status Info Alert Notices: Not Supported 00:15:02.890 EGE Aggregate Log Change Notices: Not Supported 00:15:02.890 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.890 Zone Descriptor Change Notices: Not Supported 00:15:02.890 Discovery Log Change Notices: Not Supported 00:15:02.890 Controller Attributes 00:15:02.890 128-bit Host Identifier: Supported 00:15:02.890 
Non-Operational Permissive Mode: Not Supported 00:15:02.890 NVM Sets: Not Supported 00:15:02.890 Read Recovery Levels: Not Supported 00:15:02.890 Endurance Groups: Not Supported 00:15:02.890 Predictable Latency Mode: Not Supported 00:15:02.890 Traffic Based Keep ALive: Not Supported 00:15:02.890 Namespace Granularity: Not Supported 00:15:02.890 SQ Associations: Not Supported 00:15:02.890 UUID List: Not Supported 00:15:02.890 Multi-Domain Subsystem: Not Supported 00:15:02.890 Fixed Capacity Management: Not Supported 00:15:02.890 Variable Capacity Management: Not Supported 00:15:02.890 Delete Endurance Group: Not Supported 00:15:02.890 Delete NVM Set: Not Supported 00:15:02.890 Extended LBA Formats Supported: Not Supported 00:15:02.890 Flexible Data Placement Supported: Not Supported 00:15:02.890 00:15:02.890 Controller Memory Buffer Support 00:15:02.890 ================================ 00:15:02.890 Supported: No 00:15:02.890 00:15:02.890 Persistent Memory Region Support 00:15:02.890 ================================ 00:15:02.890 Supported: No 00:15:02.890 00:15:02.890 Admin Command Set Attributes 00:15:02.890 ============================ 00:15:02.890 Security Send/Receive: Not Supported 00:15:02.890 Format NVM: Not Supported 00:15:02.890 Firmware Activate/Download: Not Supported 00:15:02.890 Namespace Management: Not Supported 00:15:02.890 Device Self-Test: Not Supported 00:15:02.890 Directives: Not Supported 00:15:02.890 NVMe-MI: Not Supported 00:15:02.890 Virtualization Management: Not Supported 00:15:02.890 Doorbell Buffer Config: Not Supported 00:15:02.890 Get LBA Status Capability: Not Supported 00:15:02.890 Command & Feature Lockdown Capability: Not Supported 00:15:02.890 Abort Command Limit: 4 00:15:02.890 Async Event Request Limit: 4 00:15:02.890 Number of Firmware Slots: N/A 00:15:02.890 Firmware Slot 1 Read-Only: N/A 00:15:02.890 Firmware Activation Without Reset: N/A 00:15:02.890 Multiple Update Detection Support: N/A 00:15:02.890 Firmware Update 
Granularity: No Information Provided 00:15:02.890 Per-Namespace SMART Log: No 00:15:02.890 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.890 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:02.890 Command Effects Log Page: Supported 00:15:02.890 Get Log Page Extended Data: Supported 00:15:02.890 Telemetry Log Pages: Not Supported 00:15:02.890 Persistent Event Log Pages: Not Supported 00:15:02.890 Supported Log Pages Log Page: May Support 00:15:02.890 Commands Supported & Effects Log Page: Not Supported 00:15:02.890 Feature Identifiers & Effects Log Page:May Support 00:15:02.890 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.890 Data Area 4 for Telemetry Log: Not Supported 00:15:02.890 Error Log Page Entries Supported: 128 00:15:02.890 Keep Alive: Supported 00:15:02.890 Keep Alive Granularity: 10000 ms 00:15:02.890 00:15:02.890 NVM Command Set Attributes 00:15:02.890 ========================== 00:15:02.890 Submission Queue Entry Size 00:15:02.890 Max: 64 00:15:02.890 Min: 64 00:15:02.890 Completion Queue Entry Size 00:15:02.890 Max: 16 00:15:02.890 Min: 16 00:15:02.890 Number of Namespaces: 32 00:15:02.890 Compare Command: Supported 00:15:02.890 Write Uncorrectable Command: Not Supported 00:15:02.890 Dataset Management Command: Supported 00:15:02.890 Write Zeroes Command: Supported 00:15:02.890 Set Features Save Field: Not Supported 00:15:02.890 Reservations: Not Supported 00:15:02.890 Timestamp: Not Supported 00:15:02.890 Copy: Supported 00:15:02.890 Volatile Write Cache: Present 00:15:02.890 Atomic Write Unit (Normal): 1 00:15:02.890 Atomic Write Unit (PFail): 1 00:15:02.890 Atomic Compare & Write Unit: 1 00:15:02.890 Fused Compare & Write: Supported 00:15:02.890 Scatter-Gather List 00:15:02.890 SGL Command Set: Supported (Dword aligned) 00:15:02.890 SGL Keyed: Not Supported 00:15:02.890 SGL Bit Bucket Descriptor: Not Supported 00:15:02.890 SGL Metadata Pointer: Not Supported 00:15:02.890 Oversized SGL: Not Supported 00:15:02.890 SGL 
Metadata Address: Not Supported 00:15:02.891 SGL Offset: Not Supported 00:15:02.891 Transport SGL Data Block: Not Supported 00:15:02.891 Replay Protected Memory Block: Not Supported 00:15:02.891 00:15:02.891 Firmware Slot Information 00:15:02.891 ========================= 00:15:02.891 Active slot: 1 00:15:02.891 Slot 1 Firmware Revision: 25.01 00:15:02.891 00:15:02.891 00:15:02.891 Commands Supported and Effects 00:15:02.891 ============================== 00:15:02.891 Admin Commands 00:15:02.891 -------------- 00:15:02.891 Get Log Page (02h): Supported 00:15:02.891 Identify (06h): Supported 00:15:02.891 Abort (08h): Supported 00:15:02.891 Set Features (09h): Supported 00:15:02.891 Get Features (0Ah): Supported 00:15:02.891 Asynchronous Event Request (0Ch): Supported 00:15:02.891 Keep Alive (18h): Supported 00:15:02.891 I/O Commands 00:15:02.891 ------------ 00:15:02.891 Flush (00h): Supported LBA-Change 00:15:02.891 Write (01h): Supported LBA-Change 00:15:02.891 Read (02h): Supported 00:15:02.891 Compare (05h): Supported 00:15:02.891 Write Zeroes (08h): Supported LBA-Change 00:15:02.891 Dataset Management (09h): Supported LBA-Change 00:15:02.891 Copy (19h): Supported LBA-Change 00:15:02.891 00:15:02.891 Error Log 00:15:02.891 ========= 00:15:02.891 00:15:02.891 Arbitration 00:15:02.891 =========== 00:15:02.891 Arbitration Burst: 1 00:15:02.891 00:15:02.891 Power Management 00:15:02.891 ================ 00:15:02.891 Number of Power States: 1 00:15:02.891 Current Power State: Power State #0 00:15:02.891 Power State #0: 00:15:02.891 Max Power: 0.00 W 00:15:02.891 Non-Operational State: Operational 00:15:02.891 Entry Latency: Not Reported 00:15:02.891 Exit Latency: Not Reported 00:15:02.891 Relative Read Throughput: 0 00:15:02.891 Relative Read Latency: 0 00:15:02.891 Relative Write Throughput: 0 00:15:02.891 Relative Write Latency: 0 00:15:02.891 Idle Power: Not Reported 00:15:02.891 Active Power: Not Reported 00:15:02.891 Non-Operational Permissive Mode: Not 
Supported 00:15:02.891 00:15:02.891 Health Information 00:15:02.891 ================== 00:15:02.891 Critical Warnings: 00:15:02.891 Available Spare Space: OK 00:15:02.891 Temperature: OK 00:15:02.891 Device Reliability: OK 00:15:02.891 Read Only: No 00:15:02.891 Volatile Memory Backup: OK 00:15:02.891 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:02.891 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:02.891 Available Spare: 0% 00:15:02.891 Available Spare Threshold: 0% 00:15:02.891 Life Percentage Used: 0% 00:15:02.891 Data Units Read: 0 00:15:02.891 Data Units Written: 0 00:15:02.891 Host Read Commands: 0 00:15:02.891 Host Write Commands: 0 00:15:02.891 Controller Busy Time: 0 minutes 00:15:02.891 Power Cycles: 0 00:15:02.891 Power On Hours: 0 hours 00:15:02.891 Unsafe Shutdowns: 0 00:15:02.891 Unrecoverable Media Errors: 0 00:15:02.891 Lifetime Error Log Entries: 0 00:15:02.891 Warning Temperature Time: 0 minutes 00:15:02.891 Critical Temperature Time: 0 minutes 00:15:02.891 00:15:02.891 Number of Queues 00:15:02.891 ================ 00:15:02.891 Number of I/O Submission Queues: 127 00:15:02.891 Number of I/O Completion Queues: 127 00:15:02.891 00:15:02.891 Active Namespaces 00:15:02.891 ================= 00:15:02.891 Namespace ID:1 00:15:02.891 Error Recovery Timeout: Unlimited
[2024-11-06 13:39:26.115887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:02.891 [2024-11-06 13:39:26.123752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:02.891 [2024-11-06 13:39:26.123782] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:02.891 [2024-11-06 13:39:26.123792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.891 [2024-11-06 13:39:26.123799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.891 [2024-11-06 13:39:26.123805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.891 [2024-11-06 13:39:26.123812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.891 [2024-11-06 13:39:26.123864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:02.891 [2024-11-06 13:39:26.123875] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:02.891
[2024-11-06 13:39:26.124863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:02.891 [2024-11-06 13:39:26.124912] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:02.891 [2024-11-06 13:39:26.124919] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:02.891 [2024-11-06 13:39:26.125867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:02.891 [2024-11-06 13:39:26.125879] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:02.891 [2024-11-06 13:39:26.125928] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:02.891 [2024-11-06 13:39:26.127304] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.891
00:15:02.891 Command Set Identifier: NVM (00h) 00:15:02.891 Deallocate: Supported 00:15:02.891 Deallocated/Unwritten Error: Not Supported 00:15:02.891 Deallocated Read Value: Unknown 00:15:02.891 Deallocate in Write Zeroes: Not Supported 00:15:02.891 Deallocated Guard Field: 0xFFFF 00:15:02.891 Flush: Supported 00:15:02.891 Reservation: Supported 00:15:02.891 Namespace Sharing Capabilities: Multiple Controllers 00:15:02.891 Size (in LBAs): 131072 (0GiB) 00:15:02.891 Capacity (in LBAs): 131072 (0GiB) 00:15:02.891 Utilization (in LBAs): 131072 (0GiB) 00:15:02.891 NGUID: 5FF32A0D271441CBAB84BCC2D3AD5036 00:15:02.891 UUID: 5ff32a0d-2714-41cb-ab84-bcc2d3ad5036 00:15:02.891 Thin Provisioning: Not Supported 00:15:02.891 Per-NS Atomic Units: Yes 00:15:02.891 Atomic Boundary Size (Normal): 0 00:15:02.891 Atomic Boundary Size (PFail): 0 00:15:02.891 Atomic Boundary Offset: 0 00:15:02.891 Maximum Single Source Range Length: 65535 00:15:02.891 Maximum Copy Length: 65535 00:15:02.891 Maximum Source Range Count: 1 00:15:02.891 NGUID/EUI64 Never Reused: No 00:15:02.891 Namespace Write Protected: No 00:15:02.891 Number of LBA Formats: 1 00:15:02.891 Current LBA Format: LBA Format #00 00:15:02.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:02.891 00:15:02.891 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:03.152 [2024-11-06 13:39:26.332582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.445 Initializing NVMe Controllers 00:15:08.445 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.445 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:08.445 Initialization complete. Launching workers. 00:15:08.445 ======================================================== 00:15:08.445 Latency(us) 00:15:08.445 Device Information : IOPS MiB/s Average min max 00:15:08.445 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40036.78 156.39 3197.75 845.34 9783.77 00:15:08.445 ======================================================== 00:15:08.445 Total : 40036.78 156.39 3197.75 845.34 9783.77 00:15:08.445 00:15:08.445 [2024-11-06 13:39:31.438944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.445 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:08.445 [2024-11-06 13:39:31.631566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.737 Initializing NVMe Controllers 00:15:13.737 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.737 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:13.737 Initialization complete. Launching workers. 
00:15:13.737 ======================================================== 00:15:13.737 Latency(us) 00:15:13.737 Device Information : IOPS MiB/s Average min max 00:15:13.737 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32836.78 128.27 3897.56 1152.95 9998.32 00:15:13.737 ======================================================== 00:15:13.737 Total : 32836.78 128.27 3897.56 1152.95 9998.32 00:15:13.737 00:15:13.737 [2024-11-06 13:39:36.650861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.737 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:13.737 [2024-11-06 13:39:36.854067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.024 [2024-11-06 13:39:41.986825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.024 Initializing NVMe Controllers 00:15:19.024 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.024 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:19.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:19.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:19.024 Initialization complete. Launching workers. 
00:15:19.024 Starting thread on core 2 00:15:19.024 Starting thread on core 3 00:15:19.024 Starting thread on core 1 00:15:19.024 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:19.024 [2024-11-06 13:39:42.266347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.321 [2024-11-06 13:39:45.316143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.321 Initializing NVMe Controllers 00:15:22.321 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.321 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.321 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:22.321 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:22.321 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:22.321 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:22.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:22.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:22.321 Initialization complete. Launching workers. 
00:15:22.321 Starting thread on core 1 with urgent priority queue 00:15:22.321 Starting thread on core 2 with urgent priority queue 00:15:22.321 Starting thread on core 3 with urgent priority queue 00:15:22.321 Starting thread on core 0 with urgent priority queue 00:15:22.321 SPDK bdev Controller (SPDK2 ) core 0: 9455.67 IO/s 10.58 secs/100000 ios 00:15:22.321 SPDK bdev Controller (SPDK2 ) core 1: 10671.00 IO/s 9.37 secs/100000 ios 00:15:22.321 SPDK bdev Controller (SPDK2 ) core 2: 8181.00 IO/s 12.22 secs/100000 ios 00:15:22.321 SPDK bdev Controller (SPDK2 ) core 3: 8877.67 IO/s 11.26 secs/100000 ios 00:15:22.321 ======================================================== 00:15:22.321 00:15:22.321 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:22.321 [2024-11-06 13:39:45.593752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.321 Initializing NVMe Controllers 00:15:22.321 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.321 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.321 Namespace ID: 1 size: 0GB 00:15:22.321 Initialization complete. 00:15:22.321 INFO: using host memory buffer for IO 00:15:22.321 Hello world! 
00:15:22.321 [2024-11-06 13:39:45.603811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.321 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:22.581 [2024-11-06 13:39:45.888685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.965 Initializing NVMe Controllers 00:15:23.965 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.965 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.965 Initialization complete. Launching workers. 00:15:23.965 submit (in ns) avg, min, max = 7218.2, 3885.8, 4000180.8 00:15:23.965 complete (in ns) avg, min, max = 17233.8, 2386.7, 3999266.7 00:15:23.965 00:15:23.965 Submit histogram 00:15:23.965 ================ 00:15:23.965 Range in us Cumulative Count 00:15:23.965 3.867 - 3.893: 0.1108% ( 21) 00:15:23.965 3.893 - 3.920: 2.0216% ( 362) 00:15:23.965 3.920 - 3.947: 8.1235% ( 1156) 00:15:23.965 3.947 - 3.973: 17.5086% ( 1778) 00:15:23.965 3.973 - 4.000: 28.7939% ( 2138) 00:15:23.965 4.000 - 4.027: 39.0077% ( 1935) 00:15:23.965 4.027 - 4.053: 50.1029% ( 2102) 00:15:23.965 4.053 - 4.080: 65.6374% ( 2943) 00:15:23.965 4.080 - 4.107: 81.3091% ( 2969) 00:15:23.965 4.107 - 4.133: 91.8237% ( 1992) 00:15:23.965 4.133 - 4.160: 96.5479% ( 895) 00:15:23.965 4.160 - 4.187: 98.6223% ( 393) 00:15:23.965 4.187 - 4.213: 99.2452% ( 118) 00:15:23.965 4.213 - 4.240: 99.5091% ( 50) 00:15:23.965 4.240 - 4.267: 99.5619% ( 10) 00:15:23.965 4.267 - 4.293: 99.5672% ( 1) 00:15:23.965 4.560 - 4.587: 99.5724% ( 1) 00:15:23.965 4.800 - 4.827: 99.5777% ( 1) 00:15:23.965 4.853 - 4.880: 99.5830% ( 1) 00:15:23.965 5.040 - 5.067: 99.5883% ( 1) 00:15:23.965 5.120 - 5.147: 99.5936% ( 1) 
00:15:23.965 [first histogram tail: percentile buckets 5.227 - 13.547 us elided; 1-4 samples per bucket]
00:15:23.966 3986.773 - 4014.080: 100.0000% ( 15)
00:15:23.966
00:15:23.966 Complete histogram
00:15:23.966 ==================
00:15:23.966 Range in us Cumulative Count
00:15:23.966 2.387 - 2.400: 0.5753% ( 109)
00:15:23.966 2.400 - 2.413: 0.7020% ( 24)
00:15:23.966 2.413 - 2.427: 0.8287% ( 24)
00:15:23.966 2.427 - 2.440: 0.9290% ( 19)
00:15:23.966 2.440 - 2.453: 0.9712% ( 8)
00:15:23.966 2.453 - 2.467: 48.0338% ( 8916)
00:15:23.966 2.467 - 2.480: 58.9390% ( 2066)
00:15:23.966 2.480 - 2.493: 71.3117% ( 2344)
00:15:23.966 2.493 - 2.507: 76.9543% ( 1069)
[2024-11-06 13:39:46.984424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:23.966 2.507 - 2.520: 80.8446% ( 737)
00:15:23.966 2.520 - 2.533: 84.7400% ( 738)
00:15:23.966 2.533 - 2.547: 91.0372% ( 1193)
00:15:23.966 2.547 - 2.560: 95.5450% ( 854)
00:15:23.966 2.560 - 2.573: 97.5086% ( 372)
00:15:23.966 2.573 - 2.587: 98.5643% ( 200)
00:15:23.966 2.587 - 2.600: 99.1185% ( 105)
00:15:23.966 2.600 - 2.613: 99.3296% ( 40)
00:15:23.966 2.613 - 2.627: 99.3930% ( 12)
00:15:23.966 [sparse tail: buckets 4.560 - 3481.600 us elided; 1-4 samples per bucket]
00:15:23.966 3986.773 - 4014.080: 100.0000% ( 69)
00:15:23.966
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:23.966 [
00:15:23.966 {
00:15:23.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:23.966 "subtype": "Discovery",
00:15:23.966 "listen_addresses": [],
00:15:23.966 "allow_any_host": true,
00:15:23.966 "hosts": []
00:15:23.966 },
00:15:23.966 {
00:15:23.966 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:23.966 "subtype": "NVMe",
00:15:23.966 "listen_addresses": [
00:15:23.966 {
00:15:23.966 "trtype": "VFIOUSER",
00:15:23.966 "adrfam": "IPv4",
00:15:23.966 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:23.966 "trsvcid": "0"
00:15:23.966 }
00:15:23.966 ],
00:15:23.966 "allow_any_host": true,
00:15:23.966 "hosts": [],
00:15:23.966 "serial_number": "SPDK1",
00:15:23.966 "model_number": "SPDK bdev Controller",
00:15:23.966 "max_namespaces": 32,
00:15:23.966 "min_cntlid": 1,
00:15:23.966 "max_cntlid": 65519,
00:15:23.966 "namespaces": [
00:15:23.966 {
00:15:23.966 "nsid": 1,
00:15:23.966 "bdev_name": "Malloc1",
00:15:23.966 "name": "Malloc1",
00:15:23.966 "nguid": "4916F134342749A7A0B9E302C746BD23",
00:15:23.966 "uuid": "4916f134-3427-49a7-a0b9-e302c746bd23"
00:15:23.966 },
00:15:23.966 {
00:15:23.966 "nsid": 2,
00:15:23.966 "bdev_name": "Malloc3",
00:15:23.966 "name": "Malloc3",
00:15:23.966 "nguid": "D95600C3AD8C45EB951D999B6D899E5A",
00:15:23.966 "uuid": "d95600c3-ad8c-45eb-951d-999b6d899e5a"
00:15:23.966 }
00:15:23.966 ]
00:15:23.966 },
00:15:23.966 {
00:15:23.966 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:23.966 "subtype": "NVMe",
00:15:23.966 "listen_addresses": [
00:15:23.966 {
00:15:23.966 "trtype": "VFIOUSER",
00:15:23.966 "adrfam": "IPv4",
00:15:23.966 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:23.966 "trsvcid": "0"
00:15:23.966 }
00:15:23.966 ],
00:15:23.966 "allow_any_host": true,
00:15:23.966 "hosts": [],
00:15:23.966 "serial_number": "SPDK2",
00:15:23.966 "model_number": "SPDK bdev Controller",
00:15:23.966 "max_namespaces": 32,
00:15:23.966 "min_cntlid": 1,
00:15:23.966 "max_cntlid": 65519,
00:15:23.966 "namespaces": [
00:15:23.966 {
00:15:23.966 "nsid": 1,
00:15:23.966 "bdev_name": "Malloc2",
00:15:23.966 "name": "Malloc2",
00:15:23.966 "nguid": "5FF32A0D271441CBAB84BCC2D3AD5036",
00:15:23.966 "uuid": "5ff32a0d-2714-41cb-ab84-bcc2d3ad5036"
00:15:23.966 }
00:15:23.966 ]
00:15:23.966 }
00:15:23.966 ]
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=599303
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:15:23.966 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:15:24.228 Malloc4
00:15:24.228 [2024-11-06 13:39:47.400801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:24.228 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:15:24.228 [2024-11-06 13:39:47.581047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:24.489 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:24.489 Asynchronous Event Request test
00:15:24.489 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:15:24.489 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:15:24.489 Registering asynchronous event callbacks...
00:15:24.489 Starting namespace attribute notice tests for all controllers...
00:15:24.489 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:24.489 aer_cb - Changed Namespace
00:15:24.489 Cleaning up...
00:15:24.489 [
00:15:24.489 {
00:15:24.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:24.489 "subtype": "Discovery",
00:15:24.489 "listen_addresses": [],
00:15:24.489 "allow_any_host": true,
00:15:24.489 "hosts": []
00:15:24.489 },
00:15:24.489 {
00:15:24.489 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:24.489 "subtype": "NVMe",
00:15:24.489 "listen_addresses": [
00:15:24.489 {
00:15:24.489 "trtype": "VFIOUSER",
00:15:24.489 "adrfam": "IPv4",
00:15:24.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:24.489 "trsvcid": "0"
00:15:24.489 }
00:15:24.489 ],
00:15:24.489 "allow_any_host": true,
00:15:24.489 "hosts": [],
00:15:24.489 "serial_number": "SPDK1",
00:15:24.489 "model_number": "SPDK bdev Controller",
00:15:24.489 "max_namespaces": 32,
00:15:24.489 "min_cntlid": 1,
00:15:24.489 "max_cntlid": 65519,
00:15:24.489 "namespaces": [
00:15:24.489 {
00:15:24.489 "nsid": 1,
00:15:24.489 "bdev_name": "Malloc1",
00:15:24.489 "name": "Malloc1",
00:15:24.489 "nguid": "4916F134342749A7A0B9E302C746BD23",
00:15:24.489 "uuid": "4916f134-3427-49a7-a0b9-e302c746bd23"
00:15:24.489 },
00:15:24.489 {
00:15:24.489 "nsid": 2,
00:15:24.489 "bdev_name": "Malloc3",
00:15:24.489 "name": "Malloc3",
00:15:24.489 "nguid": "D95600C3AD8C45EB951D999B6D899E5A",
00:15:24.489 "uuid": "d95600c3-ad8c-45eb-951d-999b6d899e5a"
00:15:24.489 }
00:15:24.489 ]
00:15:24.489 },
00:15:24.489 {
00:15:24.489 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:24.489 "subtype": "NVMe",
00:15:24.489 "listen_addresses": [ 00:15:24.489 { 00:15:24.489 "trtype": "VFIOUSER", 00:15:24.489 "adrfam": "IPv4", 00:15:24.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:24.489 "trsvcid": "0" 00:15:24.489 } 00:15:24.489 ], 00:15:24.489 "allow_any_host": true, 00:15:24.489 "hosts": [], 00:15:24.489 "serial_number": "SPDK2", 00:15:24.489 "model_number": "SPDK bdev Controller", 00:15:24.489 "max_namespaces": 32, 00:15:24.489 "min_cntlid": 1, 00:15:24.489 "max_cntlid": 65519, 00:15:24.490 "namespaces": [ 00:15:24.490 { 00:15:24.490 "nsid": 1, 00:15:24.490 "bdev_name": "Malloc2", 00:15:24.490 "name": "Malloc2", 00:15:24.490 "nguid": "5FF32A0D271441CBAB84BCC2D3AD5036", 00:15:24.490 "uuid": "5ff32a0d-2714-41cb-ab84-bcc2d3ad5036" 00:15:24.490 }, 00:15:24.490 { 00:15:24.490 "nsid": 2, 00:15:24.490 "bdev_name": "Malloc4", 00:15:24.490 "name": "Malloc4", 00:15:24.490 "nguid": "469BBBABF5084EC7B154167996CC9150", 00:15:24.490 "uuid": "469bbbab-f508-4ec7-b154-167996cc9150" 00:15:24.490 } 00:15:24.490 ] 00:15:24.490 } 00:15:24.490 ] 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 599303 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 589681 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 589681 ']' 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 589681 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
589681 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 589681' 00:15:24.490 killing process with pid 589681 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 589681 00:15:24.490 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 589681 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=599483 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 599483' 00:15:24.751 Process pid: 599483 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:24.751 
13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 599483 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 599483 ']' 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:24.751 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:24.751 [2024-11-06 13:39:48.072468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:24.751 [2024-11-06 13:39:48.073420] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:15:24.751 [2024-11-06 13:39:48.073462] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.013 [2024-11-06 13:39:48.146735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.013 [2024-11-06 13:39:48.182890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.013 [2024-11-06 13:39:48.182923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.013 [2024-11-06 13:39:48.182931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.013 [2024-11-06 13:39:48.182937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.013 [2024-11-06 13:39:48.182943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.013 [2024-11-06 13:39:48.184430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.013 [2024-11-06 13:39:48.184544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.013 [2024-11-06 13:39:48.184702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.013 [2024-11-06 13:39:48.184702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.013 [2024-11-06 13:39:48.239770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:25.013 [2024-11-06 13:39:48.239901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:25.013 [2024-11-06 13:39:48.240949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:25.013 [2024-11-06 13:39:48.241860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:25.013 [2024-11-06 13:39:48.241932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:25.584 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:25.584 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:25.584 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:26.527 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:26.788 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:26.788 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:26.788 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.788 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:26.788 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:27.125 Malloc1 00:15:27.125 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:27.412 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:27.412 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:27.731 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:27.731 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:27.731 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:27.731 Malloc2 00:15:27.731 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:28.028 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:28.028 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 599483 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 599483 ']' 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 599483 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.317 13:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 599483 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 599483' 00:15:28.317 killing process with pid 599483 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 599483 00:15:28.317 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 599483 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:28.597 00:15:28.597 real 0m51.371s 00:15:28.597 user 3m16.976s 00:15:28.597 sys 0m2.721s 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:28.597 ************************************ 00:15:28.597 END TEST nvmf_vfio_user 00:15:28.597 ************************************ 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.597 ************************************ 00:15:28.597 START TEST nvmf_vfio_user_nvme_compliance 00:15:28.597 ************************************ 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:28.597 * Looking for test storage... 00:15:28.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.597 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.868 13:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.868 13:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.868 --rc genhtml_branch_coverage=1 00:15:28.868 --rc genhtml_function_coverage=1 00:15:28.868 --rc genhtml_legend=1 00:15:28.868 --rc geninfo_all_blocks=1 00:15:28.868 --rc geninfo_unexecuted_blocks=1 00:15:28.868 00:15:28.868 ' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.868 --rc genhtml_branch_coverage=1 00:15:28.868 --rc genhtml_function_coverage=1 00:15:28.868 --rc genhtml_legend=1 00:15:28.868 --rc geninfo_all_blocks=1 00:15:28.868 --rc geninfo_unexecuted_blocks=1 00:15:28.868 00:15:28.868 ' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.868 --rc genhtml_branch_coverage=1 00:15:28.868 --rc genhtml_function_coverage=1 00:15:28.868 --rc 
genhtml_legend=1 00:15:28.868 --rc geninfo_all_blocks=1 00:15:28.868 --rc geninfo_unexecuted_blocks=1 00:15:28.868 00:15:28.868 ' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.868 --rc genhtml_branch_coverage=1 00:15:28.868 --rc genhtml_function_coverage=1 00:15:28.868 --rc genhtml_legend=1 00:15:28.868 --rc geninfo_all_blocks=1 00:15:28.868 --rc geninfo_unexecuted_blocks=1 00:15:28.868 00:15:28.868 ' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.868 13:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.868 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.869 13:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=600407 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 600407' 00:15:28.869 Process pid: 600407 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 600407 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 600407 ']' 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.869 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.869 [2024-11-06 13:39:52.156235] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:15:28.869 [2024-11-06 13:39:52.156308] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.869 [2024-11-06 13:39:52.232010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.177 [2024-11-06 13:39:52.273271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.177 [2024-11-06 13:39:52.273305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.177 [2024-11-06 13:39:52.273313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.177 [2024-11-06 13:39:52.273320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.177 [2024-11-06 13:39:52.273329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:29.177 [2024-11-06 13:39:52.274785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.177 [2024-11-06 13:39:52.274857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.177 [2024-11-06 13:39:52.274860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.839 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.839 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:29.839 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.779 13:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.779 malloc0 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.779 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:30.780 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:31.040 00:15:31.040 00:15:31.040 CUnit - A unit testing framework for C - Version 2.1-3 00:15:31.040 http://cunit.sourceforge.net/ 00:15:31.040 00:15:31.040 00:15:31.040 Suite: nvme_compliance 00:15:31.040 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 13:39:54.207153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.040 [2024-11-06 13:39:54.208494] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:31.041 [2024-11-06 13:39:54.208504] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:31.041 [2024-11-06 13:39:54.208509] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:31.041 [2024-11-06 13:39:54.210168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.041 passed 00:15:31.041 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 13:39:54.306790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.041 [2024-11-06 13:39:54.309810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.041 passed 00:15:31.041 Test: admin_identify_ns ...[2024-11-06 13:39:54.406001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.301 [2024-11-06 13:39:54.465760] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:31.301 [2024-11-06 13:39:54.473758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:31.301 [2024-11-06 13:39:54.494870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:31.301 passed 00:15:31.301 Test: admin_get_features_mandatory_features ...[2024-11-06 13:39:54.588879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.301 [2024-11-06 13:39:54.591896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.301 passed 00:15:31.562 Test: admin_get_features_optional_features ...[2024-11-06 13:39:54.684427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.562 [2024-11-06 13:39:54.687444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.562 passed 00:15:31.562 Test: admin_set_features_number_of_queues ...[2024-11-06 13:39:54.780610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.562 [2024-11-06 13:39:54.884862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.562 passed 00:15:31.822 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 13:39:54.978901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.822 [2024-11-06 13:39:54.981918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.822 passed 00:15:31.822 Test: admin_get_log_page_with_lpo ...[2024-11-06 13:39:55.075041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.822 [2024-11-06 13:39:55.142760] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:31.823 [2024-11-06 13:39:55.155802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.823 passed 00:15:32.083 Test: fabric_property_get ...[2024-11-06 13:39:55.247847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.083 [2024-11-06 13:39:55.249091] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:32.083 [2024-11-06 13:39:55.250867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.083 passed 00:15:32.083 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 13:39:55.347537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.083 [2024-11-06 13:39:55.348790] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:32.083 [2024-11-06 13:39:55.350561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.083 passed 00:15:32.083 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 13:39:55.442006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.344 [2024-11-06 13:39:55.520753] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.344 [2024-11-06 13:39:55.536758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.344 [2024-11-06 13:39:55.541838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.344 passed 00:15:32.344 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 13:39:55.633423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.344 [2024-11-06 13:39:55.634672] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:32.345 [2024-11-06 13:39:55.636444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.345 passed 00:15:32.604 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 13:39:55.728027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.604 [2024-11-06 13:39:55.803751] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:32.604 [2024-11-06 
13:39:55.827752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.604 [2024-11-06 13:39:55.832843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.604 passed 00:15:32.605 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 13:39:55.926819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.605 [2024-11-06 13:39:55.928058] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:32.605 [2024-11-06 13:39:55.928077] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:32.605 [2024-11-06 13:39:55.929833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.605 passed 00:15:32.865 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 13:39:56.022980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.865 [2024-11-06 13:39:56.114751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:32.865 [2024-11-06 13:39:56.122754] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:32.865 [2024-11-06 13:39:56.130756] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:32.865 [2024-11-06 13:39:56.138753] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:32.865 [2024-11-06 13:39:56.167840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.865 passed 00:15:33.126 Test: admin_create_io_sq_verify_pc ...[2024-11-06 13:39:56.261817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.126 [2024-11-06 13:39:56.281762] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:33.126 [2024-11-06 13:39:56.298987] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.126 passed 00:15:33.126 Test: admin_create_io_qp_max_qps ...[2024-11-06 13:39:56.390496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.510 [2024-11-06 13:39:57.494756] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:34.770 [2024-11-06 13:39:57.888769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.770 passed 00:15:34.770 Test: admin_create_io_sq_shared_cq ...[2024-11-06 13:39:57.981889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.770 [2024-11-06 13:39:58.113753] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:35.031 [2024-11-06 13:39:58.150804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.031 passed 00:15:35.031 00:15:35.031 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.031 suites 1 1 n/a 0 0 00:15:35.031 tests 18 18 18 0 0 00:15:35.031 asserts 360 360 360 0 n/a 00:15:35.031 00:15:35.031 Elapsed time = 1.652 seconds 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 600407 ']' 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 600407' 00:15:35.031 killing process with pid 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 600407 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:35.031 00:15:35.031 real 0m6.544s 00:15:35.031 user 0m18.579s 00:15:35.031 sys 0m0.513s 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.031 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.031 ************************************ 00:15:35.031 END TEST nvmf_vfio_user_nvme_compliance 00:15:35.031 ************************************ 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.292 ************************************ 00:15:35.292 START TEST nvmf_vfio_user_fuzz 00:15:35.292 ************************************ 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.292 * Looking for test storage... 00:15:35.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:35.292 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.554 13:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:35.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.554 --rc genhtml_branch_coverage=1 00:15:35.554 --rc genhtml_function_coverage=1 00:15:35.554 --rc genhtml_legend=1 00:15:35.554 --rc geninfo_all_blocks=1 00:15:35.554 --rc geninfo_unexecuted_blocks=1 00:15:35.554 00:15:35.554 ' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:35.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.554 --rc genhtml_branch_coverage=1 00:15:35.554 --rc genhtml_function_coverage=1 00:15:35.554 --rc genhtml_legend=1 00:15:35.554 --rc geninfo_all_blocks=1 00:15:35.554 --rc geninfo_unexecuted_blocks=1 00:15:35.554 00:15:35.554 ' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:35.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.554 --rc genhtml_branch_coverage=1 00:15:35.554 --rc genhtml_function_coverage=1 00:15:35.554 --rc genhtml_legend=1 00:15:35.554 --rc geninfo_all_blocks=1 00:15:35.554 --rc geninfo_unexecuted_blocks=1 00:15:35.554 00:15:35.554 ' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:35.554 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:35.554 --rc genhtml_branch_coverage=1 00:15:35.554 --rc genhtml_function_coverage=1 00:15:35.554 --rc genhtml_legend=1 00:15:35.554 --rc geninfo_all_blocks=1 00:15:35.554 --rc geninfo_unexecuted_blocks=1 00:15:35.554 00:15:35.554 ' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.554 13:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:35.554 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:35.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=601820 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 601820' 00:15:35.555 Process pid: 601820 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 601820 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 601820 ']' 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.555 13:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.555 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.496 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.496 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:36.496 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 malloc0 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:37.439 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:09.558 Fuzzing completed. Shutting down the fuzz application 00:16:09.558 00:16:09.558 Dumping successful admin opcodes: 00:16:09.558 8, 9, 10, 24, 00:16:09.558 Dumping successful io opcodes: 00:16:09.558 0, 00:16:09.558 NS: 0x20000081ef00 I/O qp, Total commands completed: 1137781, total successful commands: 4483, random_seed: 3864214912 00:16:09.558 NS: 0x20000081ef00 admin qp, Total commands completed: 143112, total successful commands: 1162, random_seed: 2809844480 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 601820 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 601820 ']' 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 601820 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:09.558 13:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 601820 00:16:09.558 13:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 601820' 00:16:09.558 killing process with pid 601820 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 601820 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 601820 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:09.558 00:16:09.558 real 0m33.758s 00:16:09.558 user 0m38.115s 00:16:09.558 sys 0m26.236s 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.558 ************************************ 00:16:09.558 END TEST nvmf_vfio_user_fuzz 00:16:09.558 ************************************ 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:16:09.558 13:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.559 ************************************ 00:16:09.559 START TEST nvmf_auth_target 00:16:09.559 ************************************ 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:09.559 * Looking for test storage... 00:16:09.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.559 13:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.559 13:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.559 --rc genhtml_branch_coverage=1 00:16:09.559 --rc genhtml_function_coverage=1 00:16:09.559 --rc genhtml_legend=1 00:16:09.559 --rc geninfo_all_blocks=1 00:16:09.559 --rc geninfo_unexecuted_blocks=1 00:16:09.559 00:16:09.559 ' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.559 --rc genhtml_branch_coverage=1 00:16:09.559 --rc genhtml_function_coverage=1 00:16:09.559 --rc genhtml_legend=1 00:16:09.559 --rc geninfo_all_blocks=1 00:16:09.559 --rc geninfo_unexecuted_blocks=1 00:16:09.559 00:16:09.559 ' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.559 --rc genhtml_branch_coverage=1 00:16:09.559 --rc genhtml_function_coverage=1 00:16:09.559 --rc genhtml_legend=1 00:16:09.559 --rc geninfo_all_blocks=1 00:16:09.559 --rc geninfo_unexecuted_blocks=1 00:16:09.559 00:16:09.559 ' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.559 --rc genhtml_branch_coverage=1 00:16:09.559 --rc genhtml_function_coverage=1 00:16:09.559 --rc genhtml_legend=1 00:16:09.559 
--rc geninfo_all_blocks=1 00:16:09.559 --rc geninfo_unexecuted_blocks=1 00:16:09.559 00:16:09.559 ' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.559 
13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.559 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:09.560 13:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:09.560 13:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:09.560 13:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.156 13:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:16.156 13:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:16:16.156 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:16:16.156 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:16.156 
13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:16.156 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:16.157 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.157 
13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:16.157 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.157 13:40:39 
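The discovery pass above walks sysfs: for each supported PCI address collected into `pci_devs`, any bound network interfaces show up under `/sys/bus/pci/devices/<bdf>/net/`, and their basenames are appended to `net_devs`. A minimal stand-alone sketch of that walk follows; to keep it runnable anywhere it builds a fake sysfs tree first (on a real host the loop would point at `/sys/bus/pci/devices`, and the `0000:4b:00.x`/`cvl_0_x` names are taken from this run):

```shell
# Fake sysfs layout standing in for /sys/bus/pci/devices on this host.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=""
for pci in "$sysfs"/*; do
    [ -d "$pci/net" ] || continue            # device exposes no net interface
    for dev in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${dev##*/}"
        net_devs="$net_devs ${dev##*/}"      # same role as net_devs+=(...) in the log
    done
done
```

The harness then derives `NVMF_TARGET_INTERFACE` and `NVMF_INITIATOR_INTERFACE` from the first two entries of this list.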
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.157 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:16.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:16.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms
00:16:16.421 
00:16:16.421 --- 10.0.0.2 ping statistics ---
00:16:16.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.421 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:16.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:16.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms
00:16:16.421 
00:16:16.421 --- 10.0.0.1 ping statistics ---
00:16:16.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.421 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
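Condensed, the `nvmf_tcp_init` plumbing traced above amounts to the setup fragment below: the target-side NIC is moved into its own network namespace so initiator and target can talk over real interfaces on one host, a firewall rule opens the NVMe/TCP port, and a ping in each direction verifies the link. This is a hedged sketch, not runnable as a test: every command requires root, and the interface names and 10.0.0.x addresses are simply the ones from this run.

```shell
# Target NIC (cvl_0_0) lives in namespace cvl_0_0_ns_spdk; initiator NIC
# (cvl_0_1) stays in the root namespace. Names/addresses as in this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target NIC into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port (4420) and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Later steps then prefix the target application with `ip netns exec cvl_0_0_ns_spdk` (via `NVMF_TARGET_NS_CMD`), which is why the log shows `nvmf_tgt` being launched inside the namespace.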
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.421 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=611924 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 611924 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 611924 ']' 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.686 13:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=612151 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52a0fdefa35cd8012efdfcbbf37e86182edf92db7fa50f11 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.x79 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52a0fdefa35cd8012efdfcbbf37e86182edf92db7fa50f11 0 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52a0fdefa35cd8012efdfcbbf37e86182edf92db7fa50f11 0 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52a0fdefa35cd8012efdfcbbf37e86182edf92db7fa50f11 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.x79 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.x79 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.x79 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f90ed864ebfa09adca2bbb87cd903fd4d4449c08f13714ebddb814b11a6e48d6 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q6U 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f90ed864ebfa09adca2bbb87cd903fd4d4449c08f13714ebddb814b11a6e48d6 3 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f90ed864ebfa09adca2bbb87cd903fd4d4449c08f13714ebddb814b11a6e48d6 3 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f90ed864ebfa09adca2bbb87cd903fd4d4449c08f13714ebddb814b11a6e48d6 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q6U 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q6U 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Q6U 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2ff13f3d718db390131e55a560f186e9 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rNG 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2ff13f3d718db390131e55a560f186e9 1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
2ff13f3d718db390131e55a560f186e9 1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2ff13f3d718db390131e55a560f186e9 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rNG 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rNG 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rNG 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.631 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f92f49a15072f2729e9980f2a40127bed5925de85c1bb58e 00:16:17.632 13:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VKy 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f92f49a15072f2729e9980f2a40127bed5925de85c1bb58e 2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f92f49a15072f2729e9980f2a40127bed5925de85c1bb58e 2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f92f49a15072f2729e9980f2a40127bed5925de85c1bb58e 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VKy 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VKy 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.VKy 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5073a54ca4a48d3c834e59a0e0f88dd73c9cb5cea3c0d07e 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3kV 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5073a54ca4a48d3c834e59a0e0f88dd73c9cb5cea3c0d07e 2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5073a54ca4a48d3c834e59a0e0f88dd73c9cb5cea3c0d07e 2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5073a54ca4a48d3c834e59a0e0f88dd73c9cb5cea3c0d07e 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:17.632 13:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3kV 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3kV 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.3kV 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f79ce1ce103cff56ee5d7bd72c206786 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JXd 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f79ce1ce103cff56ee5d7bd72c206786 1 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f79ce1ce103cff56ee5d7bd72c206786 1 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f79ce1ce103cff56ee5d7bd72c206786 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JXd 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JXd 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JXd 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81ab58d743101c1680e28be7d6f92629dc895a03eadf8f184340786b7fafdb01 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ndl 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81ab58d743101c1680e28be7d6f92629dc895a03eadf8f184340786b7fafdb01 3 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 81ab58d743101c1680e28be7d6f92629dc895a03eadf8f184340786b7fafdb01 3 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81ab58d743101c1680e28be7d6f92629dc895a03eadf8f184340786b7fafdb01 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ndl 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ndl 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ndl 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 611924 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 611924 ']' 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.895 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.896 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
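Each `gen_dhchap_key` call above follows the same pattern: draw random bytes with `xxd -p` from `/dev/urandom`, wrap them into a `DHHC-1:<digest>:<base64>:` secret through an inline python snippet (which the log does not show), write it to a `mktemp` file, and `chmod 0600` it. A stand-alone sketch of one null-digest, 48-hex-character key follows; the base64 payload layout (key bytes followed by a little-endian CRC32 of the key, as in the nvme-cli convention) is an assumption, and `od` stands in for `xxd` purely for portability:

```shell
# Sketch of gen_dhchap_key null 48: 24 random bytes -> 48 hex chars,
# formatted as "DHHC-1:00:<base64(key || crc32(key))>:" and stored 0600.
key=$(head -c 24 /dev/urandom | od -An -v -tx1 | tr -d ' \n')   # log uses: xxd -p -c0 -l 24
file=$(mktemp -t spdk.key-null.XXX)

python3 - "$key" 0 > "$file" <<'PY'
import base64, binascii, struct, sys
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(raw))    # payload layout: assumption
digest = int(sys.argv[2])                       # 0=null, 1=sha256, 2=sha384, 3=sha512
print('DHHC-1:%02x:%s:' % (digest, base64.b64encode(raw + crc).decode()))
PY

chmod 0600 "$file"
```

The `keys[]`/`ckeys[]` pairs in the log are just this routine run with different digests and lengths (sha256/32, sha384/48, sha512/64), yielding the `/tmp/spdk.key-*.???` files registered next.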
00:16:17.896 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.896 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 612151 /var/tmp/host.sock 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 612151 ']' 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:18.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x79 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.x79 00:16:18.158 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.x79 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Q6U ]] 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q6U 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q6U 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q6U 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rNG 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rNG 00:16:18.583 13:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rNG 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.VKy ]] 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VKy 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VKy 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VKy 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3kV 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3kV 00:16:18.847 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3kV 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JXd ]] 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JXd 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JXd 00:16:19.108 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JXd 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ndl 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ndl 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ndl 00:16:19.370 13:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.370 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.632 13:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.632 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.894 00:16:19.894 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.894 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.894 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.155 { 00:16:20.155 "cntlid": 1, 00:16:20.155 "qid": 0, 00:16:20.155 "state": "enabled", 00:16:20.155 "thread": "nvmf_tgt_poll_group_000", 00:16:20.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:20.155 "listen_address": { 00:16:20.155 "trtype": "TCP", 00:16:20.155 "adrfam": "IPv4", 00:16:20.155 "traddr": "10.0.0.2", 00:16:20.155 "trsvcid": "4420" 00:16:20.155 }, 00:16:20.155 "peer_address": { 00:16:20.155 "trtype": "TCP", 00:16:20.155 "adrfam": "IPv4", 00:16:20.155 "traddr": "10.0.0.1", 00:16:20.155 "trsvcid": "45424" 00:16:20.155 }, 00:16:20.155 "auth": { 00:16:20.155 "state": "completed", 00:16:20.155 "digest": "sha256", 00:16:20.155 "dhgroup": "null" 00:16:20.155 } 00:16:20.155 } 00:16:20.155 ]' 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.155 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.415 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:20.415 13:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.356 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.616 00:16:21.616 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.616 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.616 13:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.877 { 00:16:21.877 "cntlid": 3, 00:16:21.877 "qid": 0, 00:16:21.877 "state": "enabled", 00:16:21.877 "thread": "nvmf_tgt_poll_group_000", 00:16:21.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:21.877 "listen_address": { 00:16:21.877 "trtype": "TCP", 00:16:21.877 "adrfam": "IPv4", 00:16:21.877 
"traddr": "10.0.0.2", 00:16:21.877 "trsvcid": "4420" 00:16:21.877 }, 00:16:21.877 "peer_address": { 00:16:21.877 "trtype": "TCP", 00:16:21.877 "adrfam": "IPv4", 00:16:21.877 "traddr": "10.0.0.1", 00:16:21.877 "trsvcid": "45462" 00:16:21.877 }, 00:16:21.877 "auth": { 00:16:21.877 "state": "completed", 00:16:21.877 "digest": "sha256", 00:16:21.877 "dhgroup": "null" 00:16:21.877 } 00:16:21.877 } 00:16:21.877 ]' 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.877 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.138 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.138 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:22.138 13:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.078 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.338 00:16:23.338 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.338 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.338 
13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.599 { 00:16:23.599 "cntlid": 5, 00:16:23.599 "qid": 0, 00:16:23.599 "state": "enabled", 00:16:23.599 "thread": "nvmf_tgt_poll_group_000", 00:16:23.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:23.599 "listen_address": { 00:16:23.599 "trtype": "TCP", 00:16:23.599 "adrfam": "IPv4", 00:16:23.599 "traddr": "10.0.0.2", 00:16:23.599 "trsvcid": "4420" 00:16:23.599 }, 00:16:23.599 "peer_address": { 00:16:23.599 "trtype": "TCP", 00:16:23.599 "adrfam": "IPv4", 00:16:23.599 "traddr": "10.0.0.1", 00:16:23.599 "trsvcid": "45486" 00:16:23.599 }, 00:16:23.599 "auth": { 00:16:23.599 "state": "completed", 00:16:23.599 "digest": "sha256", 00:16:23.599 "dhgroup": "null" 00:16:23.599 } 00:16:23.599 } 00:16:23.599 ]' 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.599 13:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.859 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:23.859 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.799 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.799 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.060 00:16:25.060 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.060 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.060 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.321 
13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.321 { 00:16:25.321 "cntlid": 7, 00:16:25.321 "qid": 0, 00:16:25.321 "state": "enabled", 00:16:25.321 "thread": "nvmf_tgt_poll_group_000", 00:16:25.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.321 "listen_address": { 00:16:25.321 "trtype": "TCP", 00:16:25.321 "adrfam": "IPv4", 00:16:25.321 "traddr": "10.0.0.2", 00:16:25.321 "trsvcid": "4420" 00:16:25.321 }, 00:16:25.321 "peer_address": { 00:16:25.321 "trtype": "TCP", 00:16:25.321 "adrfam": "IPv4", 00:16:25.321 "traddr": "10.0.0.1", 00:16:25.321 "trsvcid": "59504" 00:16:25.321 }, 00:16:25.321 "auth": { 00:16:25.321 "state": "completed", 00:16:25.321 "digest": "sha256", 00:16:25.321 "dhgroup": "null" 00:16:25.321 } 00:16:25.321 } 00:16:25.321 ]' 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.321 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.581 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:25.582 13:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.522 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.782 00:16:26.782 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.782 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.782 13:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.042 { 00:16:27.042 "cntlid": 9, 00:16:27.042 "qid": 0, 00:16:27.042 "state": "enabled", 00:16:27.042 "thread": "nvmf_tgt_poll_group_000", 00:16:27.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.042 "listen_address": { 00:16:27.042 "trtype": "TCP", 00:16:27.042 "adrfam": "IPv4", 00:16:27.042 "traddr": "10.0.0.2", 00:16:27.042 "trsvcid": "4420" 00:16:27.042 }, 00:16:27.042 "peer_address": { 00:16:27.042 "trtype": "TCP", 00:16:27.042 "adrfam": "IPv4", 00:16:27.042 "traddr": "10.0.0.1", 00:16:27.042 "trsvcid": "59530" 00:16:27.042 
}, 00:16:27.042 "auth": { 00:16:27.042 "state": "completed", 00:16:27.042 "digest": "sha256", 00:16:27.042 "dhgroup": "ffdhe2048" 00:16:27.042 } 00:16:27.042 } 00:16:27.042 ]' 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.042 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.302 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:27.302 13:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:27.872 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.132 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.133 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.393 00:16:28.393 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.393 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.393 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.653 { 00:16:28.653 "cntlid": 11, 00:16:28.653 "qid": 0, 00:16:28.653 "state": "enabled", 00:16:28.653 "thread": "nvmf_tgt_poll_group_000", 00:16:28.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.653 "listen_address": { 00:16:28.653 "trtype": "TCP", 00:16:28.653 "adrfam": "IPv4", 00:16:28.653 "traddr": "10.0.0.2", 00:16:28.653 "trsvcid": "4420" 00:16:28.653 }, 00:16:28.653 "peer_address": { 00:16:28.653 "trtype": "TCP", 00:16:28.653 "adrfam": "IPv4", 00:16:28.653 "traddr": "10.0.0.1", 00:16:28.653 "trsvcid": "59548" 00:16:28.653 }, 00:16:28.653 "auth": { 00:16:28.653 "state": "completed", 00:16:28.653 "digest": "sha256", 00:16:28.653 "dhgroup": "ffdhe2048" 00:16:28.653 } 00:16:28.653 } 00:16:28.653 ]' 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.653 13:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.653 13:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.653 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.653 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.653 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.914 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:28.914 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.853 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.853 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.854 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.114 00:16:30.114 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.114 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.114 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.374 13:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.374 { 00:16:30.374 "cntlid": 13, 00:16:30.374 "qid": 0, 00:16:30.374 "state": "enabled", 00:16:30.374 "thread": "nvmf_tgt_poll_group_000", 00:16:30.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.374 "listen_address": { 00:16:30.374 "trtype": "TCP", 00:16:30.374 "adrfam": "IPv4", 00:16:30.374 "traddr": "10.0.0.2", 00:16:30.374 "trsvcid": "4420" 00:16:30.374 }, 00:16:30.374 "peer_address": { 00:16:30.374 "trtype": "TCP", 00:16:30.374 "adrfam": "IPv4", 00:16:30.374 "traddr": "10.0.0.1", 00:16:30.374 "trsvcid": "59560" 00:16:30.374 }, 00:16:30.374 "auth": { 00:16:30.374 "state": "completed", 00:16:30.374 "digest": "sha256", 00:16:30.374 "dhgroup": "ffdhe2048" 00:16:30.374 } 00:16:30.374 } 00:16:30.374 ]' 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.374 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.635 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:30.635 13:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.576 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.577 13:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.837 00:16:31.837 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.837 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.837 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.097 { 00:16:32.097 "cntlid": 15, 00:16:32.097 "qid": 0, 00:16:32.097 "state": "enabled", 00:16:32.097 "thread": "nvmf_tgt_poll_group_000", 00:16:32.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.097 "listen_address": { 00:16:32.097 "trtype": "TCP", 00:16:32.097 "adrfam": "IPv4", 00:16:32.097 "traddr": "10.0.0.2", 00:16:32.097 "trsvcid": "4420" 00:16:32.097 }, 00:16:32.097 "peer_address": { 00:16:32.097 "trtype": "TCP", 00:16:32.097 "adrfam": "IPv4", 00:16:32.097 "traddr": "10.0.0.1", 
00:16:32.097 "trsvcid": "59574" 00:16:32.097 }, 00:16:32.097 "auth": { 00:16:32.097 "state": "completed", 00:16:32.097 "digest": "sha256", 00:16:32.097 "dhgroup": "ffdhe2048" 00:16:32.097 } 00:16:32.097 } 00:16:32.097 ]' 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.097 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.098 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.098 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.098 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.098 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.358 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:32.358 13:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.299 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.300 13:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.300 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.561 00:16:33.561 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.561 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.561 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.822 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.822 { 00:16:33.822 "cntlid": 17, 00:16:33.822 "qid": 0, 00:16:33.822 "state": "enabled", 00:16:33.822 "thread": "nvmf_tgt_poll_group_000", 00:16:33.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.822 "listen_address": { 00:16:33.822 "trtype": "TCP", 00:16:33.822 "adrfam": "IPv4", 00:16:33.822 "traddr": "10.0.0.2", 00:16:33.822 "trsvcid": "4420" 00:16:33.822 }, 00:16:33.822 "peer_address": { 00:16:33.822 "trtype": "TCP", 00:16:33.822 "adrfam": "IPv4", 00:16:33.822 "traddr": "10.0.0.1", 00:16:33.822 "trsvcid": "59604" 00:16:33.822 }, 00:16:33.822 "auth": { 00:16:33.822 "state": "completed", 00:16:33.822 "digest": "sha256", 00:16:33.822 "dhgroup": "ffdhe3072" 00:16:33.822 } 00:16:33.822 } 00:16:33.822 ]' 00:16:33.823 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.823 13:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.823 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.083 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:34.083 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:34.653 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.914 13:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.914 13:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.914 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.174 00:16:35.174 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.174 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.174 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.435 { 00:16:35.435 "cntlid": 19, 00:16:35.435 "qid": 0, 00:16:35.435 "state": "enabled", 00:16:35.435 "thread": "nvmf_tgt_poll_group_000", 00:16:35.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.435 "listen_address": { 00:16:35.435 "trtype": "TCP", 00:16:35.435 "adrfam": "IPv4", 00:16:35.435 "traddr": "10.0.0.2", 00:16:35.435 "trsvcid": "4420" 00:16:35.435 }, 00:16:35.435 "peer_address": { 00:16:35.435 "trtype": "TCP", 00:16:35.435 "adrfam": "IPv4", 00:16:35.435 "traddr": "10.0.0.1", 00:16:35.435 "trsvcid": "45450" 00:16:35.435 }, 00:16:35.435 "auth": { 00:16:35.435 "state": "completed", 00:16:35.435 "digest": "sha256", 00:16:35.435 "dhgroup": "ffdhe3072" 00:16:35.435 } 00:16:35.435 } 00:16:35.435 ]' 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.435 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.698 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:35.698 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.640 13:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.640 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.641 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.641 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.641 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.901 00:16:36.901 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.901 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.901 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.161 { 00:16:37.161 "cntlid": 21, 00:16:37.161 "qid": 0, 00:16:37.161 "state": "enabled", 00:16:37.161 "thread": "nvmf_tgt_poll_group_000", 00:16:37.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.161 "listen_address": { 00:16:37.161 "trtype": "TCP", 00:16:37.161 "adrfam": "IPv4", 00:16:37.161 "traddr": "10.0.0.2", 00:16:37.161 
"trsvcid": "4420" 00:16:37.161 }, 00:16:37.161 "peer_address": { 00:16:37.161 "trtype": "TCP", 00:16:37.161 "adrfam": "IPv4", 00:16:37.161 "traddr": "10.0.0.1", 00:16:37.161 "trsvcid": "45484" 00:16:37.161 }, 00:16:37.161 "auth": { 00:16:37.161 "state": "completed", 00:16:37.161 "digest": "sha256", 00:16:37.161 "dhgroup": "ffdhe3072" 00:16:37.161 } 00:16:37.161 } 00:16:37.161 ]' 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.161 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.421 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:37.421 13:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:37.992 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.252 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.513 00:16:38.513 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.513 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:38.513 13:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.773 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.773 { 00:16:38.773 "cntlid": 23, 00:16:38.773 "qid": 0, 00:16:38.773 "state": "enabled", 00:16:38.773 "thread": "nvmf_tgt_poll_group_000", 00:16:38.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.774 "listen_address": { 00:16:38.774 "trtype": "TCP", 00:16:38.774 "adrfam": "IPv4", 00:16:38.774 "traddr": "10.0.0.2", 00:16:38.774 "trsvcid": "4420" 00:16:38.774 }, 00:16:38.774 "peer_address": { 00:16:38.774 "trtype": "TCP", 00:16:38.774 "adrfam": "IPv4", 00:16:38.774 "traddr": "10.0.0.1", 00:16:38.774 "trsvcid": "45498" 00:16:38.774 }, 00:16:38.774 "auth": { 00:16:38.774 "state": "completed", 00:16:38.774 "digest": "sha256", 00:16:38.774 "dhgroup": "ffdhe3072" 00:16:38.774 } 00:16:38.774 } 00:16:38.774 ]' 00:16:38.774 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.774 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.774 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.774 13:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.774 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.034 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.034 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.034 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.034 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:39.034 13:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.974 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.234 00:16:40.234 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.234 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.234 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.494 13:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.494 { 00:16:40.494 "cntlid": 25, 00:16:40.494 "qid": 0, 00:16:40.494 "state": "enabled", 00:16:40.494 "thread": "nvmf_tgt_poll_group_000", 00:16:40.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.494 "listen_address": { 00:16:40.494 "trtype": "TCP", 00:16:40.494 "adrfam": "IPv4", 00:16:40.494 "traddr": "10.0.0.2", 00:16:40.494 "trsvcid": "4420" 00:16:40.494 }, 00:16:40.494 "peer_address": { 00:16:40.494 "trtype": "TCP", 00:16:40.494 "adrfam": "IPv4", 00:16:40.494 "traddr": "10.0.0.1", 00:16:40.494 "trsvcid": "45524" 00:16:40.494 }, 00:16:40.494 "auth": { 00:16:40.494 "state": "completed", 00:16:40.494 "digest": "sha256", 00:16:40.494 "dhgroup": "ffdhe4096" 00:16:40.494 } 00:16:40.494 } 00:16:40.494 ]' 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.494 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.755 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.755 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.755 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.755 13:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.755 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:40.755 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.803 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.803 13:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.803 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.079 00:16:42.079 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.079 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.079 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.370 { 00:16:42.370 "cntlid": 27, 00:16:42.370 "qid": 0, 00:16:42.370 "state": "enabled", 00:16:42.370 "thread": "nvmf_tgt_poll_group_000", 00:16:42.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.370 "listen_address": { 00:16:42.370 "trtype": "TCP", 00:16:42.370 "adrfam": "IPv4", 00:16:42.370 "traddr": "10.0.0.2", 00:16:42.370 
"trsvcid": "4420" 00:16:42.370 }, 00:16:42.370 "peer_address": { 00:16:42.370 "trtype": "TCP", 00:16:42.370 "adrfam": "IPv4", 00:16:42.370 "traddr": "10.0.0.1", 00:16:42.370 "trsvcid": "45538" 00:16:42.370 }, 00:16:42.370 "auth": { 00:16:42.370 "state": "completed", 00:16:42.370 "digest": "sha256", 00:16:42.370 "dhgroup": "ffdhe4096" 00:16:42.370 } 00:16:42.370 } 00:16:42.370 ]' 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.370 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.635 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:42.635 13:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.576 13:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.836 00:16:43.836 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.836 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:43.836 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.098 { 00:16:44.098 "cntlid": 29, 00:16:44.098 "qid": 0, 00:16:44.098 "state": "enabled", 00:16:44.098 "thread": "nvmf_tgt_poll_group_000", 00:16:44.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.098 "listen_address": { 00:16:44.098 "trtype": "TCP", 00:16:44.098 "adrfam": "IPv4", 00:16:44.098 "traddr": "10.0.0.2", 00:16:44.098 "trsvcid": "4420" 00:16:44.098 }, 00:16:44.098 "peer_address": { 00:16:44.098 "trtype": "TCP", 00:16:44.098 "adrfam": "IPv4", 00:16:44.098 "traddr": "10.0.0.1", 00:16:44.098 "trsvcid": "45570" 00:16:44.098 }, 00:16:44.098 "auth": { 00:16:44.098 "state": "completed", 00:16:44.098 "digest": "sha256", 00:16:44.098 "dhgroup": "ffdhe4096" 00:16:44.098 } 00:16:44.098 } 00:16:44.098 ]' 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.098 13:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.098 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.359 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:44.359 13:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:45.300 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.301 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.562 00:16:45.562 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.562 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.562 13:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.823 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.823 { 00:16:45.824 "cntlid": 31, 00:16:45.824 "qid": 0, 00:16:45.824 "state": "enabled", 00:16:45.824 "thread": "nvmf_tgt_poll_group_000", 00:16:45.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.824 "listen_address": { 00:16:45.824 "trtype": "TCP", 00:16:45.824 "adrfam": "IPv4", 00:16:45.824 "traddr": "10.0.0.2", 00:16:45.824 "trsvcid": "4420" 00:16:45.824 }, 00:16:45.824 "peer_address": { 00:16:45.824 "trtype": "TCP", 00:16:45.824 "adrfam": "IPv4", 00:16:45.824 "traddr": "10.0.0.1", 00:16:45.824 "trsvcid": "49888" 00:16:45.824 }, 00:16:45.824 "auth": { 00:16:45.824 "state": "completed", 00:16:45.824 "digest": "sha256", 00:16:45.824 "dhgroup": "ffdhe4096" 00:16:45.824 } 00:16:45.824 } 00:16:45.824 ]' 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.824 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.085 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:46.086 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:16:47.027 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.027 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.027 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.028 13:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.028 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.601 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.601 { 00:16:47.601 "cntlid": 33, 00:16:47.601 "qid": 0, 00:16:47.601 "state": "enabled", 00:16:47.601 "thread": "nvmf_tgt_poll_group_000", 00:16:47.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.601 "listen_address": { 00:16:47.601 "trtype": "TCP", 00:16:47.601 "adrfam": "IPv4", 00:16:47.601 "traddr": "10.0.0.2", 00:16:47.601 
"trsvcid": "4420" 00:16:47.601 }, 00:16:47.601 "peer_address": { 00:16:47.601 "trtype": "TCP", 00:16:47.601 "adrfam": "IPv4", 00:16:47.601 "traddr": "10.0.0.1", 00:16:47.601 "trsvcid": "49914" 00:16:47.601 }, 00:16:47.601 "auth": { 00:16:47.601 "state": "completed", 00:16:47.601 "digest": "sha256", 00:16:47.601 "dhgroup": "ffdhe6144" 00:16:47.601 } 00:16:47.601 } 00:16:47.601 ]' 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.601 13:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.862 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.862 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.862 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.862 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:16:47.862 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=:
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:48.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:48.804 13:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.804 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.064 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.064 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.064 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.064 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.324
00:16:49.324 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:49.324 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:49.324 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:49.585 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:49.585 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:49.585 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.585 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:49.586 {
00:16:49.586 "cntlid": 35,
00:16:49.586 "qid": 0,
00:16:49.586 "state": "enabled",
00:16:49.586 "thread": "nvmf_tgt_poll_group_000",
00:16:49.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:49.586 "listen_address": {
00:16:49.586 "trtype": "TCP",
00:16:49.586 "adrfam": "IPv4",
00:16:49.586 "traddr": "10.0.0.2",
00:16:49.586 "trsvcid": "4420"
00:16:49.586 },
00:16:49.586 "peer_address": {
00:16:49.586 "trtype": "TCP",
00:16:49.586 "adrfam": "IPv4",
00:16:49.586 "traddr": "10.0.0.1",
00:16:49.586 "trsvcid": "49946"
00:16:49.586 },
00:16:49.586 "auth": {
00:16:49.586 "state": "completed",
00:16:49.586 "digest": "sha256",
00:16:49.586 "dhgroup": "ffdhe6144"
00:16:49.586 }
00:16:49.586 }
00:16:49.586 ]'
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:49.586 13:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:49.846 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:16:49.846 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:50.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:50.787 13:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.787 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:51.047
00:16:51.047 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:51.047 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:51.047 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:51.308 {
00:16:51.308 "cntlid": 37,
00:16:51.308 "qid": 0,
00:16:51.308 "state": "enabled",
00:16:51.308 "thread": "nvmf_tgt_poll_group_000",
00:16:51.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:51.308 "listen_address": {
00:16:51.308 "trtype": "TCP",
00:16:51.308 "adrfam": "IPv4",
00:16:51.308 "traddr": "10.0.0.2",
00:16:51.308 "trsvcid": "4420"
00:16:51.308 },
00:16:51.308 "peer_address": {
00:16:51.308 "trtype": "TCP",
00:16:51.308 "adrfam": "IPv4",
00:16:51.308 "traddr": "10.0.0.1",
00:16:51.308 "trsvcid": "49992"
00:16:51.308 },
00:16:51.308 "auth": {
00:16:51.308 "state": "completed",
00:16:51.308 "digest": "sha256",
00:16:51.308 "dhgroup": "ffdhe6144"
00:16:51.308 }
00:16:51.308 }
00:16:51.308 ]'
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:51.308 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:51.568 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:51.568 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:51.568 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:51.568 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1:
00:16:51.568 13:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1:
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:52.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:52.509 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:53.080
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:53.080 {
00:16:53.080 "cntlid": 39,
00:16:53.080 "qid": 0,
00:16:53.080 "state": "enabled",
00:16:53.080 "thread": "nvmf_tgt_poll_group_000",
00:16:53.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:53.080 "listen_address": {
00:16:53.080 "trtype": "TCP",
00:16:53.080 "adrfam": "IPv4",
00:16:53.080 "traddr": "10.0.0.2",
00:16:53.080 "trsvcid": "4420"
00:16:53.080 },
00:16:53.080 "peer_address": {
00:16:53.080 "trtype": "TCP",
00:16:53.080 "adrfam": "IPv4",
00:16:53.080 "traddr": "10.0.0.1",
00:16:53.080 "trsvcid": "50018"
00:16:53.080 },
00:16:53.080 "auth": {
00:16:53.080 "state": "completed",
00:16:53.080 "digest": "sha256",
00:16:53.080 "dhgroup": "ffdhe6144"
00:16:53.080 }
00:16:53.080 }
00:16:53.080 ]'
00:16:53.080 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:53.340 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:53.600 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:16:53.600 13:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:54.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:54.172 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:54.433 13:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:55.004
00:16:55.004 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:55.004 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:55.004 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:55.265 {
00:16:55.265 "cntlid": 41,
00:16:55.265 "qid": 0,
00:16:55.265 "state": "enabled",
00:16:55.265 "thread": "nvmf_tgt_poll_group_000",
00:16:55.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:55.265 "listen_address": {
00:16:55.265 "trtype": "TCP",
00:16:55.265 "adrfam": "IPv4",
00:16:55.265 "traddr": "10.0.0.2",
00:16:55.265 "trsvcid": "4420"
00:16:55.265 },
00:16:55.265 "peer_address": {
00:16:55.265 "trtype": "TCP",
00:16:55.265 "adrfam": "IPv4",
00:16:55.265 "traddr": "10.0.0.1",
00:16:55.265 "trsvcid": "50032"
00:16:55.265 },
00:16:55.265 "auth": {
00:16:55.265 "state": "completed",
00:16:55.265 "digest": "sha256",
00:16:55.265 "dhgroup": "ffdhe8192"
00:16:55.265 }
00:16:55.265 }
00:16:55.265 ]'
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.265 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.525 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=:
00:16:55.525 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=:
00:16:56.097 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:56.365 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:56.366 13:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:56.939
00:16:56.939 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:56.939 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:56.939 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:57.200 {
00:16:57.200 "cntlid": 43,
00:16:57.200 "qid": 0,
00:16:57.200 "state": "enabled",
00:16:57.200 "thread": "nvmf_tgt_poll_group_000",
00:16:57.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:57.200 "listen_address": {
00:16:57.200 "trtype": "TCP",
00:16:57.200 "adrfam": "IPv4",
00:16:57.200 "traddr": "10.0.0.2",
00:16:57.200 "trsvcid": "4420"
00:16:57.200 },
00:16:57.200 "peer_address": {
00:16:57.200 "trtype": "TCP",
00:16:57.200 "adrfam": "IPv4",
00:16:57.200 "traddr": "10.0.0.1",
00:16:57.200 "trsvcid": "34882"
00:16:57.200 },
00:16:57.200 "auth": {
00:16:57.200 "state": "completed",
00:16:57.200 "digest": "sha256",
00:16:57.200 "dhgroup": "ffdhe8192"
00:16:57.200 }
00:16:57.200 }
00:16:57.200 ]'
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:57.200 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.460 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:16:57.460 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.401 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.973 00:16:58.973 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.973 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.973 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.233 { 00:16:59.233 "cntlid": 45, 00:16:59.233 "qid": 0, 00:16:59.233 "state": "enabled", 00:16:59.233 "thread": "nvmf_tgt_poll_group_000", 00:16:59.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.233 
"listen_address": { 00:16:59.233 "trtype": "TCP", 00:16:59.233 "adrfam": "IPv4", 00:16:59.233 "traddr": "10.0.0.2", 00:16:59.233 "trsvcid": "4420" 00:16:59.233 }, 00:16:59.233 "peer_address": { 00:16:59.233 "trtype": "TCP", 00:16:59.233 "adrfam": "IPv4", 00:16:59.233 "traddr": "10.0.0.1", 00:16:59.233 "trsvcid": "34908" 00:16:59.233 }, 00:16:59.233 "auth": { 00:16:59.233 "state": "completed", 00:16:59.233 "digest": "sha256", 00:16:59.233 "dhgroup": "ffdhe8192" 00:16:59.233 } 00:16:59.233 } 00:16:59.233 ]' 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.233 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.493 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:16:59.493 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.436 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.437 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.008 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.008 { 00:17:01.008 "cntlid": 47, 00:17:01.008 "qid": 0, 00:17:01.008 "state": "enabled", 00:17:01.008 "thread": "nvmf_tgt_poll_group_000", 00:17:01.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.008 "listen_address": { 00:17:01.008 "trtype": "TCP", 00:17:01.008 "adrfam": "IPv4", 00:17:01.008 "traddr": "10.0.0.2", 00:17:01.008 "trsvcid": "4420" 00:17:01.008 }, 00:17:01.008 "peer_address": { 00:17:01.008 "trtype": "TCP", 00:17:01.008 "adrfam": "IPv4", 00:17:01.008 "traddr": "10.0.0.1", 00:17:01.008 "trsvcid": "34946" 00:17:01.008 }, 00:17:01.008 "auth": { 00:17:01.008 "state": "completed", 00:17:01.008 "digest": "sha256", 00:17:01.008 "dhgroup": "ffdhe8192" 00:17:01.008 } 00:17:01.008 } 00:17:01.008 ]' 00:17:01.008 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.269 13:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.269 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.530 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:01.530 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.100 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.361 
13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.361 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.621 00:17:02.621 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.621 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.621 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.881 { 00:17:02.881 "cntlid": 49, 00:17:02.881 "qid": 0, 00:17:02.881 "state": "enabled", 00:17:02.881 "thread": "nvmf_tgt_poll_group_000", 00:17:02.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.881 "listen_address": { 00:17:02.881 "trtype": "TCP", 00:17:02.881 "adrfam": "IPv4", 00:17:02.881 "traddr": "10.0.0.2", 00:17:02.881 "trsvcid": "4420" 00:17:02.881 }, 00:17:02.881 "peer_address": { 00:17:02.881 "trtype": "TCP", 00:17:02.881 "adrfam": "IPv4", 00:17:02.881 "traddr": "10.0.0.1", 00:17:02.881 "trsvcid": "34974" 00:17:02.881 }, 00:17:02.881 "auth": { 00:17:02.881 "state": "completed", 00:17:02.881 "digest": "sha384", 00:17:02.881 "dhgroup": "null" 00:17:02.881 } 00:17:02.881 } 00:17:02.881 ]' 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.881 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.882 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.882 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.882 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.882 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:02.882 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.142 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:03.142 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.084 13:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.084 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.345 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.345 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.606 { 00:17:04.606 "cntlid": 51, 00:17:04.606 "qid": 0, 00:17:04.606 "state": "enabled", 00:17:04.606 "thread": "nvmf_tgt_poll_group_000", 00:17:04.606 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.606 "listen_address": { 00:17:04.606 "trtype": "TCP", 00:17:04.606 "adrfam": "IPv4", 00:17:04.606 "traddr": "10.0.0.2", 00:17:04.606 "trsvcid": "4420" 00:17:04.606 }, 00:17:04.606 "peer_address": { 00:17:04.606 "trtype": "TCP", 00:17:04.606 "adrfam": "IPv4", 00:17:04.606 "traddr": "10.0.0.1", 00:17:04.606 "trsvcid": "35012" 00:17:04.606 }, 00:17:04.606 "auth": { 00:17:04.606 "state": "completed", 00:17:04.606 "digest": "sha384", 00:17:04.606 "dhgroup": "null" 00:17:04.606 } 00:17:04.606 } 00:17:04.606 ]' 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.606 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.866 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:04.866 13:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:05.438 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.699 13:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.699 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.960 00:17:05.960 13:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.960 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.960 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.221 { 00:17:06.221 "cntlid": 53, 00:17:06.221 "qid": 0, 00:17:06.221 "state": "enabled", 00:17:06.221 "thread": "nvmf_tgt_poll_group_000", 00:17:06.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.221 "listen_address": { 00:17:06.221 "trtype": "TCP", 00:17:06.221 "adrfam": "IPv4", 00:17:06.221 "traddr": "10.0.0.2", 00:17:06.221 "trsvcid": "4420" 00:17:06.221 }, 00:17:06.221 "peer_address": { 00:17:06.221 "trtype": "TCP", 00:17:06.221 "adrfam": "IPv4", 00:17:06.221 "traddr": "10.0.0.1", 00:17:06.221 "trsvcid": "57354" 00:17:06.221 }, 00:17:06.221 "auth": { 00:17:06.221 "state": "completed", 00:17:06.221 "digest": "sha384", 00:17:06.221 "dhgroup": "null" 00:17:06.221 } 00:17:06.221 } 00:17:06.221 ]' 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.221 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.482 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:06.482 13:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:07.422 
13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.422 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.682 00:17:07.682 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.682 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.682 13:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.942 13:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.942 { 00:17:07.942 "cntlid": 55, 00:17:07.942 "qid": 0, 00:17:07.942 "state": "enabled", 00:17:07.942 "thread": "nvmf_tgt_poll_group_000", 00:17:07.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.942 "listen_address": { 00:17:07.942 "trtype": "TCP", 00:17:07.942 "adrfam": "IPv4", 00:17:07.942 "traddr": "10.0.0.2", 00:17:07.942 "trsvcid": "4420" 00:17:07.942 }, 00:17:07.942 "peer_address": { 00:17:07.942 "trtype": "TCP", 00:17:07.942 "adrfam": "IPv4", 00:17:07.942 "traddr": "10.0.0.1", 00:17:07.942 "trsvcid": "57386" 00:17:07.942 }, 00:17:07.942 "auth": { 00:17:07.942 "state": "completed", 00:17:07.942 "digest": "sha384", 00:17:07.942 "dhgroup": "null" 00:17:07.942 } 00:17:07.942 } 00:17:07.942 ]' 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.942 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.202 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:08.202 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:09.143 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.144 13:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.144 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.404 00:17:09.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.664 { 00:17:09.664 "cntlid": 57, 00:17:09.664 "qid": 0, 00:17:09.664 "state": "enabled", 00:17:09.664 "thread": "nvmf_tgt_poll_group_000", 00:17:09.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.664 "listen_address": { 00:17:09.664 "trtype": "TCP", 00:17:09.664 "adrfam": "IPv4", 00:17:09.664 "traddr": "10.0.0.2", 00:17:09.664 
"trsvcid": "4420" 00:17:09.664 }, 00:17:09.664 "peer_address": { 00:17:09.664 "trtype": "TCP", 00:17:09.664 "adrfam": "IPv4", 00:17:09.664 "traddr": "10.0.0.1", 00:17:09.664 "trsvcid": "57410" 00:17:09.664 }, 00:17:09.664 "auth": { 00:17:09.664 "state": "completed", 00:17:09.664 "digest": "sha384", 00:17:09.664 "dhgroup": "ffdhe2048" 00:17:09.664 } 00:17:09.664 } 00:17:09.664 ]' 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.664 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.664 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.664 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.664 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.925 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:09.925 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.866 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.866 13:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.866 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.867 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.867 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.127 00:17:11.127 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.127 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.127 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.386 { 00:17:11.386 "cntlid": 59, 00:17:11.386 "qid": 0, 00:17:11.386 "state": "enabled", 00:17:11.386 "thread": "nvmf_tgt_poll_group_000", 00:17:11.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.386 "listen_address": { 00:17:11.386 "trtype": "TCP", 00:17:11.386 "adrfam": "IPv4", 00:17:11.386 "traddr": "10.0.0.2", 00:17:11.386 "trsvcid": "4420" 00:17:11.386 }, 00:17:11.386 "peer_address": { 00:17:11.386 "trtype": "TCP", 00:17:11.386 "adrfam": "IPv4", 00:17:11.386 "traddr": "10.0.0.1", 00:17:11.386 "trsvcid": "57440" 00:17:11.386 }, 00:17:11.386 "auth": { 00:17:11.386 "state": "completed", 00:17:11.386 "digest": "sha384", 00:17:11.386 "dhgroup": "ffdhe2048" 00:17:11.386 } 00:17:11.386 } 00:17:11.386 ]' 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.386 13:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.386 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.647 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:11.647 13:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.589 13:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.850 00:17:12.850 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.850 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.850 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.112 13:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.112 { 00:17:13.112 "cntlid": 61, 00:17:13.112 "qid": 0, 00:17:13.112 "state": "enabled", 00:17:13.112 "thread": "nvmf_tgt_poll_group_000", 00:17:13.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.112 "listen_address": { 00:17:13.112 "trtype": "TCP", 00:17:13.112 "adrfam": "IPv4", 00:17:13.112 "traddr": "10.0.0.2", 00:17:13.112 "trsvcid": "4420" 00:17:13.112 }, 00:17:13.112 "peer_address": { 00:17:13.112 "trtype": "TCP", 00:17:13.112 "adrfam": "IPv4", 00:17:13.112 "traddr": "10.0.0.1", 00:17:13.112 "trsvcid": "57458" 00:17:13.112 }, 00:17:13.112 "auth": { 00:17:13.112 "state": "completed", 00:17:13.112 "digest": "sha384", 00:17:13.112 "dhgroup": "ffdhe2048" 00:17:13.112 } 00:17:13.112 } 00:17:13.112 ]' 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.112 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.372 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:13.373 13:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.314 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.574 00:17:14.574 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.574 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.574 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.835 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.836 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.836 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.836 13:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.836 { 00:17:14.836 "cntlid": 63, 00:17:14.836 "qid": 0, 00:17:14.836 "state": "enabled", 00:17:14.836 "thread": "nvmf_tgt_poll_group_000", 00:17:14.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.836 "listen_address": { 00:17:14.836 "trtype": "TCP", 00:17:14.836 "adrfam": 
"IPv4", 00:17:14.836 "traddr": "10.0.0.2", 00:17:14.836 "trsvcid": "4420" 00:17:14.836 }, 00:17:14.836 "peer_address": { 00:17:14.836 "trtype": "TCP", 00:17:14.836 "adrfam": "IPv4", 00:17:14.836 "traddr": "10.0.0.1", 00:17:14.836 "trsvcid": "57492" 00:17:14.836 }, 00:17:14.836 "auth": { 00:17:14.836 "state": "completed", 00:17:14.836 "digest": "sha384", 00:17:14.836 "dhgroup": "ffdhe2048" 00:17:14.836 } 00:17:14.836 } 00:17:14.836 ]' 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.836 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.096 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:15.096 13:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.037 
13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.037 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.298 00:17:16.298 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.298 13:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.298 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.559 { 00:17:16.559 "cntlid": 65, 00:17:16.559 "qid": 0, 00:17:16.559 "state": "enabled", 00:17:16.559 "thread": "nvmf_tgt_poll_group_000", 00:17:16.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.559 "listen_address": { 00:17:16.559 "trtype": "TCP", 00:17:16.559 "adrfam": "IPv4", 00:17:16.559 "traddr": "10.0.0.2", 00:17:16.559 "trsvcid": "4420" 00:17:16.559 }, 00:17:16.559 "peer_address": { 00:17:16.559 "trtype": "TCP", 00:17:16.559 "adrfam": "IPv4", 00:17:16.559 "traddr": "10.0.0.1", 00:17:16.559 "trsvcid": "53196" 00:17:16.559 }, 00:17:16.559 "auth": { 00:17:16.559 "state": "completed", 00:17:16.559 "digest": "sha384", 00:17:16.559 "dhgroup": "ffdhe3072" 00:17:16.559 } 00:17:16.559 } 00:17:16.559 ]' 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.559 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.819 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:16.820 13:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.391 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.651 13:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.914 00:17:17.914 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.914 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.914 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.175 13:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.175 { 00:17:18.175 "cntlid": 67, 00:17:18.175 "qid": 0, 00:17:18.175 "state": "enabled", 00:17:18.175 "thread": "nvmf_tgt_poll_group_000", 00:17:18.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.175 "listen_address": { 00:17:18.175 "trtype": "TCP", 00:17:18.175 "adrfam": "IPv4", 00:17:18.175 "traddr": "10.0.0.2", 00:17:18.175 "trsvcid": "4420" 00:17:18.175 }, 00:17:18.175 "peer_address": { 00:17:18.175 "trtype": "TCP", 00:17:18.175 "adrfam": "IPv4", 00:17:18.175 "traddr": "10.0.0.1", 00:17:18.175 "trsvcid": "53228" 00:17:18.175 }, 00:17:18.175 "auth": { 00:17:18.175 "state": "completed", 00:17:18.175 "digest": "sha384", 00:17:18.175 "dhgroup": "ffdhe3072" 00:17:18.175 } 00:17:18.175 } 00:17:18.175 ]' 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.175 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.435 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:18.435 13:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:19.007 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.007 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.007 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.007 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.273 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.534 00:17:19.534 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.534 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.534 13:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.795 { 00:17:19.795 "cntlid": 69, 00:17:19.795 "qid": 0, 00:17:19.795 "state": "enabled", 00:17:19.795 "thread": "nvmf_tgt_poll_group_000", 00:17:19.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.795 
"listen_address": { 00:17:19.795 "trtype": "TCP", 00:17:19.795 "adrfam": "IPv4", 00:17:19.795 "traddr": "10.0.0.2", 00:17:19.795 "trsvcid": "4420" 00:17:19.795 }, 00:17:19.795 "peer_address": { 00:17:19.795 "trtype": "TCP", 00:17:19.795 "adrfam": "IPv4", 00:17:19.795 "traddr": "10.0.0.1", 00:17:19.795 "trsvcid": "53258" 00:17:19.795 }, 00:17:19.795 "auth": { 00:17:19.795 "state": "completed", 00:17:19.795 "digest": "sha384", 00:17:19.795 "dhgroup": "ffdhe3072" 00:17:19.795 } 00:17:19.795 } 00:17:19.795 ]' 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.795 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.056 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:20.056 13:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.999 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.261 00:17:21.261 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.261 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:21.261 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.261 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.522 { 00:17:21.522 "cntlid": 71, 00:17:21.522 "qid": 0, 00:17:21.522 "state": "enabled", 00:17:21.522 "thread": "nvmf_tgt_poll_group_000", 00:17:21.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.522 "listen_address": { 00:17:21.522 "trtype": "TCP", 00:17:21.522 "adrfam": "IPv4", 00:17:21.522 "traddr": "10.0.0.2", 00:17:21.522 "trsvcid": "4420" 00:17:21.522 }, 00:17:21.522 "peer_address": { 00:17:21.522 "trtype": "TCP", 00:17:21.522 "adrfam": "IPv4", 00:17:21.522 "traddr": "10.0.0.1", 00:17:21.522 "trsvcid": "53286" 00:17:21.522 }, 00:17:21.522 "auth": { 00:17:21.522 "state": "completed", 00:17:21.522 "digest": "sha384", 00:17:21.522 "dhgroup": "ffdhe3072" 00:17:21.522 } 00:17:21.522 } 00:17:21.522 ]' 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.522 13:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.522 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.782 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:21.782 13:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:22.352 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.612 13:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.872 00:17:22.872 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.872 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.872 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.130 13:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.130 { 00:17:23.130 "cntlid": 73, 00:17:23.130 "qid": 0, 00:17:23.130 "state": "enabled", 00:17:23.130 "thread": "nvmf_tgt_poll_group_000", 00:17:23.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.130 "listen_address": { 00:17:23.130 "trtype": "TCP", 00:17:23.130 "adrfam": "IPv4", 00:17:23.130 "traddr": "10.0.0.2", 00:17:23.130 "trsvcid": "4420" 00:17:23.130 }, 00:17:23.130 "peer_address": { 00:17:23.130 "trtype": "TCP", 00:17:23.130 "adrfam": "IPv4", 00:17:23.130 "traddr": "10.0.0.1", 00:17:23.130 "trsvcid": "53302" 00:17:23.130 }, 00:17:23.130 "auth": { 00:17:23.130 "state": "completed", 00:17:23.130 "digest": "sha384", 00:17:23.130 "dhgroup": "ffdhe4096" 00:17:23.130 } 00:17:23.130 } 00:17:23.130 ]' 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.130 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.390 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.390 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.390 13:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.390 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:23.390 13:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.334 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.595 00:17:24.595 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.595 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.595 13:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.855 { 00:17:24.855 "cntlid": 75, 00:17:24.855 "qid": 0, 00:17:24.855 "state": "enabled", 00:17:24.855 "thread": "nvmf_tgt_poll_group_000", 00:17:24.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.855 
"listen_address": { 00:17:24.855 "trtype": "TCP", 00:17:24.855 "adrfam": "IPv4", 00:17:24.855 "traddr": "10.0.0.2", 00:17:24.855 "trsvcid": "4420" 00:17:24.855 }, 00:17:24.855 "peer_address": { 00:17:24.855 "trtype": "TCP", 00:17:24.855 "adrfam": "IPv4", 00:17:24.855 "traddr": "10.0.0.1", 00:17:24.855 "trsvcid": "53330" 00:17:24.855 }, 00:17:24.855 "auth": { 00:17:24.855 "state": "completed", 00:17:24.855 "digest": "sha384", 00:17:24.855 "dhgroup": "ffdhe4096" 00:17:24.855 } 00:17:24.855 } 00:17:24.855 ]' 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.855 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.116 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.116 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.116 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.116 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.116 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.117 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:25.117 13:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.059 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.319 00:17:26.319 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:26.319 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.319 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.579 { 00:17:26.579 "cntlid": 77, 00:17:26.579 "qid": 0, 00:17:26.579 "state": "enabled", 00:17:26.579 "thread": "nvmf_tgt_poll_group_000", 00:17:26.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.579 "listen_address": { 00:17:26.579 "trtype": "TCP", 00:17:26.579 "adrfam": "IPv4", 00:17:26.579 "traddr": "10.0.0.2", 00:17:26.579 "trsvcid": "4420" 00:17:26.579 }, 00:17:26.579 "peer_address": { 00:17:26.579 "trtype": "TCP", 00:17:26.579 "adrfam": "IPv4", 00:17:26.579 "traddr": "10.0.0.1", 00:17:26.579 "trsvcid": "53150" 00:17:26.579 }, 00:17:26.579 "auth": { 00:17:26.579 "state": "completed", 00:17:26.579 "digest": "sha384", 00:17:26.579 "dhgroup": "ffdhe4096" 00:17:26.579 } 00:17:26.579 } 00:17:26.579 ]' 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.579 13:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.579 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.840 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.840 13:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.840 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.840 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.840 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.840 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:26.840 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.784 13:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:27.784 13:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.784 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.044 00:17:28.044 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.044 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.044 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.304 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.304 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.304 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.304 13:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.304 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.304 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.304 { 00:17:28.304 "cntlid": 79, 00:17:28.304 "qid": 0, 00:17:28.304 "state": "enabled", 00:17:28.305 "thread": "nvmf_tgt_poll_group_000", 00:17:28.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.305 "listen_address": { 00:17:28.305 "trtype": "TCP", 00:17:28.305 "adrfam": "IPv4", 00:17:28.305 "traddr": "10.0.0.2", 00:17:28.305 "trsvcid": "4420" 00:17:28.305 }, 00:17:28.305 "peer_address": { 00:17:28.305 "trtype": "TCP", 00:17:28.305 "adrfam": "IPv4", 00:17:28.305 "traddr": "10.0.0.1", 00:17:28.305 "trsvcid": "53182" 00:17:28.305 }, 00:17:28.305 "auth": { 00:17:28.305 "state": "completed", 00:17:28.305 "digest": "sha384", 00:17:28.305 "dhgroup": "ffdhe4096" 00:17:28.305 } 00:17:28.305 } 00:17:28.305 ]' 00:17:28.305 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.305 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.305 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.564 13:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:28.564 13:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.506 13:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.078 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.078 { 00:17:30.078 "cntlid": 81, 00:17:30.078 "qid": 0, 00:17:30.078 "state": "enabled", 00:17:30.078 "thread": "nvmf_tgt_poll_group_000", 00:17:30.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.078 "listen_address": { 
00:17:30.078 "trtype": "TCP", 00:17:30.078 "adrfam": "IPv4", 00:17:30.078 "traddr": "10.0.0.2", 00:17:30.078 "trsvcid": "4420" 00:17:30.078 }, 00:17:30.078 "peer_address": { 00:17:30.078 "trtype": "TCP", 00:17:30.078 "adrfam": "IPv4", 00:17:30.078 "traddr": "10.0.0.1", 00:17:30.078 "trsvcid": "53212" 00:17:30.078 }, 00:17:30.078 "auth": { 00:17:30.078 "state": "completed", 00:17:30.078 "digest": "sha384", 00:17:30.078 "dhgroup": "ffdhe6144" 00:17:30.078 } 00:17:30.078 } 00:17:30.078 ]' 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.078 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:30.339 13:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.282 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.544 13:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.805 00:17:31.805 13:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.805 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.806 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.067 { 00:17:32.067 "cntlid": 83, 00:17:32.067 "qid": 0, 00:17:32.067 "state": "enabled", 00:17:32.067 "thread": "nvmf_tgt_poll_group_000", 00:17:32.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.067 "listen_address": { 00:17:32.067 "trtype": "TCP", 00:17:32.067 "adrfam": "IPv4", 00:17:32.067 "traddr": "10.0.0.2", 00:17:32.067 "trsvcid": "4420" 00:17:32.067 }, 00:17:32.067 "peer_address": { 00:17:32.067 "trtype": "TCP", 00:17:32.067 "adrfam": "IPv4", 00:17:32.067 "traddr": "10.0.0.1", 00:17:32.067 "trsvcid": "53224" 00:17:32.067 }, 00:17:32.067 "auth": { 00:17:32.067 "state": "completed", 00:17:32.067 "digest": "sha384", 00:17:32.067 "dhgroup": "ffdhe6144" 00:17:32.067 } 00:17:32.067 } 00:17:32.067 ]' 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.067 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.328 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:32.328 13:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.271 13:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.271 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.531 00:17:33.531 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.531 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.531 13:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.792 { 00:17:33.792 "cntlid": 85, 00:17:33.792 "qid": 0, 00:17:33.792 "state": "enabled", 00:17:33.792 "thread": "nvmf_tgt_poll_group_000", 00:17:33.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.792 "listen_address": { 00:17:33.792 "trtype": "TCP", 00:17:33.792 "adrfam": "IPv4", 00:17:33.792 "traddr": "10.0.0.2", 00:17:33.792 "trsvcid": "4420" 00:17:33.792 }, 00:17:33.792 "peer_address": { 00:17:33.792 "trtype": "TCP", 00:17:33.792 "adrfam": "IPv4", 00:17:33.792 "traddr": "10.0.0.1", 00:17:33.792 "trsvcid": "53252" 00:17:33.792 }, 00:17:33.792 "auth": { 00:17:33.792 "state": "completed", 00:17:33.792 "digest": "sha384", 00:17:33.792 "dhgroup": "ffdhe6144" 00:17:33.792 } 00:17:33.792 } 00:17:33.792 ]' 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.792 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:34.053 13:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.995 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.256 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.517 00:17:35.517 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.517 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.517 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.779 { 00:17:35.779 "cntlid": 87, 00:17:35.779 "qid": 0, 00:17:35.779 "state": "enabled", 00:17:35.779 "thread": "nvmf_tgt_poll_group_000", 00:17:35.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.779 "listen_address": { 00:17:35.779 "trtype": 
"TCP", 00:17:35.779 "adrfam": "IPv4", 00:17:35.779 "traddr": "10.0.0.2", 00:17:35.779 "trsvcid": "4420" 00:17:35.779 }, 00:17:35.779 "peer_address": { 00:17:35.779 "trtype": "TCP", 00:17:35.779 "adrfam": "IPv4", 00:17:35.779 "traddr": "10.0.0.1", 00:17:35.779 "trsvcid": "51870" 00:17:35.779 }, 00:17:35.779 "auth": { 00:17:35.779 "state": "completed", 00:17:35.779 "digest": "sha384", 00:17:35.779 "dhgroup": "ffdhe6144" 00:17:35.779 } 00:17:35.779 } 00:17:35.779 ]' 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.779 13:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.779 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.039 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:36.039 13:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.981 13:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.981 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.982 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.553 00:17:37.553 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.553 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.553 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.813 { 00:17:37.813 "cntlid": 89, 00:17:37.813 "qid": 0, 00:17:37.813 "state": "enabled", 00:17:37.813 "thread": "nvmf_tgt_poll_group_000", 00:17:37.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.813 "listen_address": { 00:17:37.813 "trtype": "TCP", 00:17:37.813 "adrfam": "IPv4", 00:17:37.813 "traddr": "10.0.0.2", 00:17:37.813 "trsvcid": "4420" 00:17:37.813 }, 00:17:37.813 "peer_address": { 00:17:37.813 "trtype": "TCP", 00:17:37.813 "adrfam": "IPv4", 00:17:37.813 "traddr": "10.0.0.1", 00:17:37.813 "trsvcid": "51892" 00:17:37.813 }, 00:17:37.813 "auth": { 00:17:37.813 "state": "completed", 00:17:37.813 "digest": "sha384", 00:17:37.813 "dhgroup": "ffdhe8192" 00:17:37.813 } 00:17:37.813 } 00:17:37.813 ]' 00:17:37.813 13:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.813 13:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.813 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.073 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:38.073 13:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.020 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.590
00:17:39.590 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:39.590 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:39.590 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:39.850 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:39.851 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:39.851 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.851 13:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:39.851 {
00:17:39.851 "cntlid": 91,
00:17:39.851 "qid": 0,
00:17:39.851 "state": "enabled",
00:17:39.851 "thread": "nvmf_tgt_poll_group_000",
00:17:39.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:39.851 "listen_address": {
00:17:39.851 "trtype": "TCP",
00:17:39.851 "adrfam": "IPv4",
00:17:39.851 "traddr": "10.0.0.2",
00:17:39.851 "trsvcid": "4420"
00:17:39.851 },
00:17:39.851 "peer_address": {
00:17:39.851 "trtype": "TCP",
00:17:39.851 "adrfam": "IPv4",
00:17:39.851 "traddr": "10.0.0.1",
00:17:39.851 "trsvcid": "51920"
00:17:39.851 },
00:17:39.851 "auth": {
00:17:39.851 "state": "completed",
00:17:39.851 "digest": "sha384",
00:17:39.851 "dhgroup": "ffdhe8192"
00:17:39.851 }
00:17:39.851 }
00:17:39.851 ]'
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:39.851 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:40.111 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:17:40.111 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==:
00:17:40.681 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:40.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:40.941 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.942 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:41.513
00:17:41.513 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:41.513 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:41.513 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:41.774 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.774 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.774 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.774 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:41.774 {
00:17:41.774 "cntlid": 93,
00:17:41.774 "qid": 0,
00:17:41.774 "state": "enabled",
00:17:41.774 "thread": "nvmf_tgt_poll_group_000",
00:17:41.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:41.774 "listen_address": {
00:17:41.774 "trtype": "TCP",
00:17:41.774 "adrfam": "IPv4",
00:17:41.774 "traddr": "10.0.0.2",
00:17:41.774 "trsvcid": "4420"
00:17:41.774 },
00:17:41.774 "peer_address": {
00:17:41.774 "trtype": "TCP",
00:17:41.774 "adrfam": "IPv4",
00:17:41.774 "traddr": "10.0.0.1",
00:17:41.774 "trsvcid": "51946"
00:17:41.774 },
00:17:41.774 "auth": {
00:17:41.774 "state": "completed",
00:17:41.774 "digest": "sha384",
00:17:41.774 "dhgroup": "ffdhe8192"
00:17:41.774 }
00:17:41.774 }
00:17:41.774 ]'
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:41.774 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.035 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1:
00:17:42.035 13:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1:
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:42.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:42.978 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:43.550
00:17:43.550 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:43.550 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:43.550 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:43.811 {
00:17:43.811 "cntlid": 95,
00:17:43.811 "qid": 0,
00:17:43.811 "state": "enabled",
00:17:43.811 "thread": "nvmf_tgt_poll_group_000",
00:17:43.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:43.811 "listen_address": {
00:17:43.811 "trtype": "TCP",
00:17:43.811 "adrfam": "IPv4",
00:17:43.811 "traddr": "10.0.0.2",
00:17:43.811 "trsvcid": "4420"
00:17:43.811 },
00:17:43.811 "peer_address": {
00:17:43.811 "trtype": "TCP",
00:17:43.811 "adrfam": "IPv4",
00:17:43.811 "traddr": "10.0.0.1",
00:17:43.811 "trsvcid": "51962"
00:17:43.811 },
00:17:43.811 "auth": {
00:17:43.811 "state": "completed",
00:17:43.811 "digest": "sha384",
00:17:43.811 "dhgroup": "ffdhe8192"
00:17:43.811 }
00:17:43.811 }
00:17:43.811 ]'
00:17:43.811 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:43.811 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:43.811 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:43.811 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:43.811 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:43.811 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:43.812 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:43.812 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.072 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:17:44.072 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:45.015 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:45.277
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.277 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.538 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.538 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:45.539 {
00:17:45.539 "cntlid": 97,
00:17:45.539 "qid": 0,
00:17:45.539 "state": "enabled",
00:17:45.539 "thread": "nvmf_tgt_poll_group_000",
00:17:45.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:45.539 "listen_address": {
00:17:45.539 "trtype": "TCP",
00:17:45.539 "adrfam": "IPv4",
00:17:45.539 "traddr": "10.0.0.2",
00:17:45.539 "trsvcid": "4420"
00:17:45.539 },
00:17:45.539 "peer_address": {
00:17:45.539 "trtype": "TCP",
00:17:45.539 "adrfam": "IPv4",
00:17:45.539 "traddr": "10.0.0.1",
00:17:45.539 "trsvcid": "51558"
00:17:45.539 },
00:17:45.539 "auth": {
00:17:45.539 "state": "completed",
00:17:45.539 "digest": "sha512",
00:17:45.539 "dhgroup": "null"
00:17:45.539 }
00:17:45.539 }
00:17:45.539 ]'
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:45.539 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:45.799 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=:
00:17:45.799 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=:
00:17:46.371 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:46.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:46.371 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:46.371 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.371 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.632 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.892
00:17:46.892 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:46.892 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:46.892 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:47.152 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:47.152 {
00:17:47.152 "cntlid": 99,
00:17:47.152 "qid": 0, 00:17:47.152 "state": "enabled", 00:17:47.153 "thread": "nvmf_tgt_poll_group_000", 00:17:47.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.153 "listen_address": { 00:17:47.153 "trtype": "TCP", 00:17:47.153 "adrfam": "IPv4", 00:17:47.153 "traddr": "10.0.0.2", 00:17:47.153 "trsvcid": "4420" 00:17:47.153 }, 00:17:47.153 "peer_address": { 00:17:47.153 "trtype": "TCP", 00:17:47.153 "adrfam": "IPv4", 00:17:47.153 "traddr": "10.0.0.1", 00:17:47.153 "trsvcid": "51600" 00:17:47.153 }, 00:17:47.153 "auth": { 00:17:47.153 "state": "completed", 00:17:47.153 "digest": "sha512", 00:17:47.153 "dhgroup": "null" 00:17:47.153 } 00:17:47.153 } 00:17:47.153 ]' 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.153 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.412 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret 
DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:47.412 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.353 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.614 00:17:48.614 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.614 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.614 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.876 { 00:17:48.876 "cntlid": 101, 00:17:48.876 "qid": 0, 00:17:48.876 "state": "enabled", 00:17:48.876 "thread": "nvmf_tgt_poll_group_000", 00:17:48.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.876 "listen_address": { 00:17:48.876 "trtype": "TCP", 00:17:48.876 "adrfam": "IPv4", 00:17:48.876 "traddr": "10.0.0.2", 00:17:48.876 "trsvcid": "4420" 00:17:48.876 }, 00:17:48.876 "peer_address": { 00:17:48.876 "trtype": "TCP", 00:17:48.876 "adrfam": "IPv4", 00:17:48.876 "traddr": "10.0.0.1", 00:17:48.876 "trsvcid": "51624" 00:17:48.876 }, 00:17:48.876 "auth": { 00:17:48.876 "state": "completed", 00:17:48.876 "digest": "sha512", 00:17:48.876 "dhgroup": "null" 00:17:48.876 } 00:17:48.876 } 
00:17:48.876 ]' 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.876 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.137 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:49.137 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.709 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.709 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.970 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.971 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.232 00:17:50.232 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.232 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.232 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.493 { 00:17:50.493 "cntlid": 103, 00:17:50.493 "qid": 0, 00:17:50.493 "state": "enabled", 00:17:50.493 "thread": "nvmf_tgt_poll_group_000", 00:17:50.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.493 "listen_address": { 00:17:50.493 "trtype": "TCP", 00:17:50.493 "adrfam": "IPv4", 00:17:50.493 "traddr": "10.0.0.2", 00:17:50.493 "trsvcid": "4420" 00:17:50.493 }, 00:17:50.493 "peer_address": { 00:17:50.493 "trtype": "TCP", 00:17:50.493 "adrfam": "IPv4", 00:17:50.493 "traddr": "10.0.0.1", 00:17:50.493 "trsvcid": "51650" 00:17:50.493 }, 00:17:50.493 "auth": { 00:17:50.493 "state": "completed", 00:17:50.493 "digest": "sha512", 00:17:50.493 "dhgroup": "null" 00:17:50.493 } 00:17:50.493 } 00:17:50.493 ]' 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.493 13:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.493 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.754 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:50.754 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.695 13:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.695 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.956 00:17:51.956 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.956 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.956 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.216 { 00:17:52.216 "cntlid": 105, 00:17:52.216 "qid": 0, 00:17:52.216 "state": "enabled", 00:17:52.216 "thread": "nvmf_tgt_poll_group_000", 00:17:52.216 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.216 "listen_address": { 00:17:52.216 "trtype": "TCP", 00:17:52.216 "adrfam": "IPv4", 00:17:52.216 "traddr": "10.0.0.2", 00:17:52.216 "trsvcid": "4420" 00:17:52.216 }, 00:17:52.216 "peer_address": { 00:17:52.216 "trtype": "TCP", 00:17:52.216 "adrfam": "IPv4", 00:17:52.216 "traddr": "10.0.0.1", 00:17:52.216 "trsvcid": "51696" 00:17:52.216 }, 00:17:52.216 "auth": { 00:17:52.216 "state": "completed", 00:17:52.216 "digest": "sha512", 00:17:52.216 "dhgroup": "ffdhe2048" 00:17:52.216 } 00:17:52.216 } 00:17:52.216 ]' 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.216 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.217 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.477 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:52.477 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.418 13:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.418 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.419 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.419 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.679 00:17:53.679 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.679 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.679 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.940 { 00:17:53.940 "cntlid": 107, 00:17:53.940 "qid": 0, 00:17:53.940 "state": "enabled", 00:17:53.940 "thread": "nvmf_tgt_poll_group_000", 00:17:53.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.940 "listen_address": { 00:17:53.940 "trtype": "TCP", 00:17:53.940 "adrfam": "IPv4", 00:17:53.940 "traddr": "10.0.0.2", 00:17:53.940 "trsvcid": "4420" 00:17:53.940 }, 00:17:53.940 "peer_address": { 00:17:53.940 "trtype": "TCP", 00:17:53.940 "adrfam": "IPv4", 00:17:53.940 "traddr": "10.0.0.1", 00:17:53.940 "trsvcid": "51718" 00:17:53.940 }, 00:17:53.940 "auth": { 00:17:53.940 "state": 
"completed", 00:17:53.940 "digest": "sha512", 00:17:53.940 "dhgroup": "ffdhe2048" 00:17:53.940 } 00:17:53.940 } 00:17:53.940 ]' 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.940 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.201 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:54.201 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:17:55.141 13:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.141 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.142 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.142 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.402 00:17:55.402 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.402 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.402 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.662 
13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.662 { 00:17:55.662 "cntlid": 109, 00:17:55.662 "qid": 0, 00:17:55.662 "state": "enabled", 00:17:55.662 "thread": "nvmf_tgt_poll_group_000", 00:17:55.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.662 "listen_address": { 00:17:55.662 "trtype": "TCP", 00:17:55.662 "adrfam": "IPv4", 00:17:55.662 "traddr": "10.0.0.2", 00:17:55.662 "trsvcid": "4420" 00:17:55.662 }, 00:17:55.662 "peer_address": { 00:17:55.662 "trtype": "TCP", 00:17:55.662 "adrfam": "IPv4", 00:17:55.662 "traddr": "10.0.0.1", 00:17:55.662 "trsvcid": "40836" 00:17:55.662 }, 00:17:55.662 "auth": { 00:17:55.662 "state": "completed", 00:17:55.662 "digest": "sha512", 00:17:55.662 "dhgroup": "ffdhe2048" 00:17:55.662 } 00:17:55.662 } 00:17:55.662 ]' 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.662 13:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.662 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.922 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:55.922 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:17:56.491 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.762 
13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.762 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.762 13:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.762 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.023 00:17:57.023 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.023 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.023 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.283 { 00:17:57.283 "cntlid": 111, 
00:17:57.283 "qid": 0, 00:17:57.283 "state": "enabled", 00:17:57.283 "thread": "nvmf_tgt_poll_group_000", 00:17:57.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.283 "listen_address": { 00:17:57.283 "trtype": "TCP", 00:17:57.283 "adrfam": "IPv4", 00:17:57.283 "traddr": "10.0.0.2", 00:17:57.283 "trsvcid": "4420" 00:17:57.283 }, 00:17:57.283 "peer_address": { 00:17:57.283 "trtype": "TCP", 00:17:57.283 "adrfam": "IPv4", 00:17:57.283 "traddr": "10.0.0.1", 00:17:57.283 "trsvcid": "40852" 00:17:57.283 }, 00:17:57.283 "auth": { 00:17:57.283 "state": "completed", 00:17:57.283 "digest": "sha512", 00:17:57.283 "dhgroup": "ffdhe2048" 00:17:57.283 } 00:17:57.283 } 00:17:57.283 ]' 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.283 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.544 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:57.544 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.485 13:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.485 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.745 00:17:58.746 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.746 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.746 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.006 { 00:17:59.006 "cntlid": 113, 00:17:59.006 "qid": 0, 00:17:59.006 "state": "enabled", 00:17:59.006 "thread": "nvmf_tgt_poll_group_000", 00:17:59.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.006 "listen_address": { 00:17:59.006 "trtype": "TCP", 00:17:59.006 "adrfam": "IPv4", 00:17:59.006 "traddr": "10.0.0.2", 00:17:59.006 "trsvcid": "4420" 00:17:59.006 }, 00:17:59.006 "peer_address": { 00:17:59.006 "trtype": "TCP", 00:17:59.006 "adrfam": "IPv4", 00:17:59.006 "traddr": "10.0.0.1", 00:17:59.006 "trsvcid": "40878" 00:17:59.006 }, 00:17:59.006 "auth": { 00:17:59.006 "state": 
"completed", 00:17:59.006 "digest": "sha512", 00:17:59.006 "dhgroup": "ffdhe3072" 00:17:59.006 } 00:17:59.006 } 00:17:59.006 ]' 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.006 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.266 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:17:59.266 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.468 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.468 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.728 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.728 { 00:18:00.728 "cntlid": 115, 00:18:00.728 "qid": 0, 00:18:00.728 "state": "enabled", 00:18:00.728 "thread": "nvmf_tgt_poll_group_000", 00:18:00.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.728 "listen_address": { 00:18:00.728 "trtype": "TCP", 00:18:00.728 "adrfam": "IPv4", 00:18:00.728 "traddr": "10.0.0.2", 00:18:00.728 "trsvcid": "4420" 00:18:00.729 }, 00:18:00.729 "peer_address": { 00:18:00.729 "trtype": "TCP", 00:18:00.729 "adrfam": "IPv4", 00:18:00.729 "traddr": "10.0.0.1", 00:18:00.729 "trsvcid": "40892" 00:18:00.729 }, 00:18:00.729 "auth": { 00:18:00.729 "state": "completed", 00:18:00.729 "digest": "sha512", 00:18:00.729 "dhgroup": "ffdhe3072" 00:18:00.729 } 00:18:00.729 } 00:18:00.729 ]' 00:18:00.729 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.729 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.729 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.989 13:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:00.989 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.075 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.383 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.383 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.383 13:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.679 { 00:18:02.679 "cntlid": 117, 00:18:02.679 "qid": 0, 00:18:02.679 "state": "enabled", 00:18:02.679 "thread": "nvmf_tgt_poll_group_000", 00:18:02.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.679 "listen_address": { 00:18:02.679 "trtype": "TCP", 00:18:02.679 "adrfam": "IPv4", 00:18:02.679 "traddr": "10.0.0.2", 00:18:02.679 "trsvcid": "4420" 00:18:02.679 }, 00:18:02.679 "peer_address": { 00:18:02.679 "trtype": "TCP", 00:18:02.679 "adrfam": "IPv4", 00:18:02.679 "traddr": "10.0.0.1", 00:18:02.679 "trsvcid": "40932" 00:18:02.679 }, 00:18:02.679 "auth": { 00:18:02.679 "state": "completed", 00:18:02.679 "digest": "sha512", 00:18:02.679 "dhgroup": "ffdhe3072" 00:18:02.679 } 00:18:02.679 } 00:18:02.679 ]' 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.679 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.958 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:02.958 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.528 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.787 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:03.787 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.788 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.048 00:18:04.048 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.048 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.048 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.308 { 00:18:04.308 "cntlid": 119, 00:18:04.308 "qid": 0, 00:18:04.308 "state": "enabled", 00:18:04.308 "thread": "nvmf_tgt_poll_group_000", 00:18:04.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.308 "listen_address": { 00:18:04.308 "trtype": "TCP", 00:18:04.308 "adrfam": "IPv4", 00:18:04.308 "traddr": "10.0.0.2", 00:18:04.308 "trsvcid": "4420" 00:18:04.308 }, 00:18:04.308 "peer_address": { 00:18:04.308 "trtype": "TCP", 00:18:04.308 "adrfam": "IPv4", 00:18:04.308 "traddr": "10.0.0.1", 
00:18:04.308 "trsvcid": "40946" 00:18:04.308 }, 00:18:04.308 "auth": { 00:18:04.308 "state": "completed", 00:18:04.308 "digest": "sha512", 00:18:04.308 "dhgroup": "ffdhe3072" 00:18:04.308 } 00:18:04.308 } 00:18:04.308 ]' 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.308 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.569 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:04.569 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.509 13:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.770 00:18:05.770 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.770 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.770 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.030 { 00:18:06.030 "cntlid": 121, 00:18:06.030 "qid": 0, 00:18:06.030 "state": "enabled", 00:18:06.030 "thread": "nvmf_tgt_poll_group_000", 00:18:06.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.030 "listen_address": { 00:18:06.030 "trtype": "TCP", 00:18:06.030 "adrfam": "IPv4", 00:18:06.030 "traddr": "10.0.0.2", 00:18:06.030 "trsvcid": "4420" 00:18:06.030 }, 00:18:06.030 "peer_address": { 00:18:06.030 "trtype": "TCP", 00:18:06.030 "adrfam": "IPv4", 00:18:06.030 "traddr": "10.0.0.1", 00:18:06.030 "trsvcid": "58184" 00:18:06.030 }, 00:18:06.030 "auth": { 00:18:06.030 "state": "completed", 00:18:06.030 "digest": "sha512", 00:18:06.030 "dhgroup": "ffdhe4096" 00:18:06.030 } 00:18:06.030 } 00:18:06.030 ]' 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.030 13:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.030 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.290 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:06.290 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.229 13:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.229 13:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.229 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.490 00:18:07.490 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.490 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.490 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.751 { 00:18:07.751 "cntlid": 123, 00:18:07.751 "qid": 0, 00:18:07.751 "state": "enabled", 00:18:07.751 "thread": "nvmf_tgt_poll_group_000", 00:18:07.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.751 "listen_address": { 00:18:07.751 "trtype": "TCP", 00:18:07.751 "adrfam": "IPv4", 00:18:07.751 "traddr": "10.0.0.2", 00:18:07.751 "trsvcid": "4420" 00:18:07.751 }, 00:18:07.751 "peer_address": { 00:18:07.751 "trtype": "TCP", 00:18:07.751 "adrfam": "IPv4", 00:18:07.751 "traddr": "10.0.0.1", 00:18:07.751 "trsvcid": "58222" 00:18:07.751 }, 00:18:07.751 "auth": { 00:18:07.751 "state": "completed", 00:18:07.751 "digest": "sha512", 00:18:07.751 "dhgroup": "ffdhe4096" 00:18:07.751 } 00:18:07.751 } 00:18:07.751 ]' 00:18:07.751 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.751 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.012 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:08.012 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:08.953 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.953 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.953 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.953 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.954 13:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.954 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.215 00:18:09.215 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.215 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.216 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.476 { 00:18:09.476 "cntlid": 125, 00:18:09.476 "qid": 0, 00:18:09.476 "state": "enabled", 00:18:09.476 "thread": "nvmf_tgt_poll_group_000", 00:18:09.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.476 "listen_address": { 00:18:09.476 "trtype": "TCP", 00:18:09.476 "adrfam": "IPv4", 00:18:09.476 "traddr": "10.0.0.2", 00:18:09.476 
"trsvcid": "4420" 00:18:09.476 }, 00:18:09.476 "peer_address": { 00:18:09.476 "trtype": "TCP", 00:18:09.476 "adrfam": "IPv4", 00:18:09.476 "traddr": "10.0.0.1", 00:18:09.476 "trsvcid": "58248" 00:18:09.476 }, 00:18:09.476 "auth": { 00:18:09.476 "state": "completed", 00:18:09.476 "digest": "sha512", 00:18:09.476 "dhgroup": "ffdhe4096" 00:18:09.476 } 00:18:09.476 } 00:18:09.476 ]' 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.476 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.737 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.737 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.737 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.737 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:09.737 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.678 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.678 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.938 00:18:10.938 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.938 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.938 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.199 { 00:18:11.199 "cntlid": 127, 00:18:11.199 "qid": 0, 00:18:11.199 "state": "enabled", 00:18:11.199 "thread": "nvmf_tgt_poll_group_000", 00:18:11.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.199 "listen_address": { 00:18:11.199 "trtype": "TCP", 00:18:11.199 "adrfam": "IPv4", 00:18:11.199 "traddr": "10.0.0.2", 00:18:11.199 "trsvcid": "4420" 00:18:11.199 }, 00:18:11.199 "peer_address": { 00:18:11.199 "trtype": "TCP", 00:18:11.199 "adrfam": "IPv4", 00:18:11.199 "traddr": "10.0.0.1", 00:18:11.199 "trsvcid": "58288" 00:18:11.199 }, 00:18:11.199 "auth": { 00:18:11.199 "state": "completed", 00:18:11.199 "digest": "sha512", 00:18:11.199 "dhgroup": "ffdhe4096" 00:18:11.199 } 00:18:11.199 } 00:18:11.199 ]' 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.199 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.459 13:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:11.459 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
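Each pass of the loop in this log exercises one (digest, dhgroup, keyid) combination: the host sets its DH-HMAC-CHAP options, the target registers the host NQN with the matching key, a controller is attached and its qpair's auth state is checked, then everything is torn down. The repeated sequence can be sketched as a dry run below. Here `rpc` is a hypothetical stub that only echoes the command line instead of invoking SPDK's `scripts/rpc.py` against a live target, and the `ckeys` placeholder values are illustrative; the NQNs, flags, and the `${ckeys[$keyid]:+...}` idiom (which the xtrace shows as `target/auth.sh@68`) are taken from the log itself.

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate pass. `rpc` is a stand-in stub:
# it echoes the command instead of calling scripts/rpc.py -s /var/tmp/host.sock.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

digest=sha512
dhgroup=ffdhe6144
keyid=0
# Placeholder controller-key table; in the log, key3 has no controller key.
ckeys=([0]=ctrlsecret0 [1]=ctrlsecret1 [2]=ctrlsecret2)

# Same idiom as target/auth.sh@68: expands to two words when ckeys[$keyid]
# is set, and to nothing (an empty array) when it is unset.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

host_nqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"

rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"
rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$host_nqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key "key$keyid" "${ckey[@]}"
# The log then verifies .[0].auth.{digest,dhgroup,state} via jq before:
rpc bdev_nvme_detach_controller nvme0
```

The bidirectional-vs-unidirectional distinction visible in the log falls out of that one array expansion: for key0–key2 the host passes `--dhchap-ctrlr-key`, while for key3 `ckey` expands to nothing and only `--dhchap-key key3` is sent.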
00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.403 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.979 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.979 13:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.979 { 00:18:12.979 "cntlid": 129, 00:18:12.979 "qid": 0, 00:18:12.979 "state": "enabled", 00:18:12.979 "thread": "nvmf_tgt_poll_group_000", 00:18:12.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.979 "listen_address": { 00:18:12.979 "trtype": "TCP", 00:18:12.979 "adrfam": "IPv4", 00:18:12.979 "traddr": "10.0.0.2", 00:18:12.979 "trsvcid": "4420" 00:18:12.979 }, 00:18:12.979 "peer_address": { 00:18:12.979 "trtype": "TCP", 00:18:12.979 "adrfam": "IPv4", 00:18:12.979 "traddr": "10.0.0.1", 00:18:12.979 "trsvcid": "58318" 00:18:12.979 }, 00:18:12.979 "auth": { 00:18:12.979 "state": "completed", 00:18:12.979 "digest": "sha512", 00:18:12.979 "dhgroup": "ffdhe6144" 00:18:12.979 } 00:18:12.979 } 00:18:12.979 ]' 00:18:12.979 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.242 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.504 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:13.504 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.076 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.076 13:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.337 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.599 00:18:14.599 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.599 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.599 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.860 { 00:18:14.860 "cntlid": 131, 00:18:14.860 "qid": 0, 00:18:14.860 "state": "enabled", 00:18:14.860 "thread": "nvmf_tgt_poll_group_000", 00:18:14.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.860 "listen_address": { 00:18:14.860 "trtype": "TCP", 00:18:14.860 "adrfam": "IPv4", 00:18:14.860 "traddr": "10.0.0.2", 00:18:14.860 
"trsvcid": "4420" 00:18:14.860 }, 00:18:14.860 "peer_address": { 00:18:14.860 "trtype": "TCP", 00:18:14.860 "adrfam": "IPv4", 00:18:14.860 "traddr": "10.0.0.1", 00:18:14.860 "trsvcid": "58356" 00:18:14.860 }, 00:18:14.860 "auth": { 00:18:14.860 "state": "completed", 00:18:14.860 "digest": "sha512", 00:18:14.860 "dhgroup": "ffdhe6144" 00:18:14.860 } 00:18:14.860 } 00:18:14.860 ]' 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.860 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:15.120 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:16.061 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.061 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.062 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.632 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.632 { 00:18:16.632 "cntlid": 133, 00:18:16.632 "qid": 0, 00:18:16.632 "state": "enabled", 00:18:16.632 "thread": "nvmf_tgt_poll_group_000", 00:18:16.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.632 "listen_address": { 00:18:16.632 "trtype": "TCP", 00:18:16.632 "adrfam": "IPv4", 00:18:16.632 "traddr": "10.0.0.2", 00:18:16.632 "trsvcid": "4420" 00:18:16.632 }, 00:18:16.632 "peer_address": { 00:18:16.632 "trtype": "TCP", 00:18:16.632 "adrfam": "IPv4", 00:18:16.632 "traddr": "10.0.0.1", 00:18:16.632 "trsvcid": "45082" 00:18:16.632 }, 00:18:16.632 "auth": { 00:18:16.632 "state": "completed", 00:18:16.632 "digest": "sha512", 00:18:16.632 "dhgroup": "ffdhe6144" 00:18:16.632 } 00:18:16.632 } 00:18:16.632 ]' 00:18:16.632 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.893 13:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.893 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.154 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:17.154 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.727 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.988 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.249 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.510 { 00:18:18.510 "cntlid": 135, 00:18:18.510 "qid": 0, 00:18:18.510 "state": "enabled", 00:18:18.510 "thread": "nvmf_tgt_poll_group_000", 00:18:18.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.510 "listen_address": { 00:18:18.510 "trtype": "TCP", 00:18:18.510 "adrfam": "IPv4", 00:18:18.510 "traddr": "10.0.0.2", 00:18:18.510 "trsvcid": "4420" 00:18:18.510 }, 00:18:18.510 "peer_address": { 00:18:18.510 "trtype": "TCP", 00:18:18.510 "adrfam": "IPv4", 00:18:18.510 "traddr": "10.0.0.1", 00:18:18.510 "trsvcid": "45118" 00:18:18.510 }, 00:18:18.510 "auth": { 00:18:18.510 "state": "completed", 00:18:18.510 "digest": "sha512", 00:18:18.510 "dhgroup": "ffdhe6144" 00:18:18.510 } 00:18:18.510 } 00:18:18.510 ]' 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.510 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.772 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.772 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.772 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.772 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.772 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.033 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:19.033 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.606 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.606 13:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.867 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.439 00:18:20.439 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.439 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.439 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.700 { 00:18:20.700 "cntlid": 137, 00:18:20.700 "qid": 0, 00:18:20.700 "state": "enabled", 00:18:20.700 "thread": "nvmf_tgt_poll_group_000", 00:18:20.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.700 "listen_address": { 00:18:20.700 "trtype": "TCP", 00:18:20.700 "adrfam": "IPv4", 00:18:20.700 "traddr": "10.0.0.2", 00:18:20.700 
"trsvcid": "4420" 00:18:20.700 }, 00:18:20.700 "peer_address": { 00:18:20.700 "trtype": "TCP", 00:18:20.700 "adrfam": "IPv4", 00:18:20.700 "traddr": "10.0.0.1", 00:18:20.700 "trsvcid": "45148" 00:18:20.700 }, 00:18:20.700 "auth": { 00:18:20.700 "state": "completed", 00:18:20.700 "digest": "sha512", 00:18:20.700 "dhgroup": "ffdhe8192" 00:18:20.700 } 00:18:20.700 } 00:18:20.700 ]' 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.700 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.960 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:20.960 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.533 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.794 13:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.794 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.365 00:18:22.365 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.365 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.365 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.627 { 00:18:22.627 "cntlid": 139, 00:18:22.627 "qid": 0, 00:18:22.627 "state": "enabled", 00:18:22.627 "thread": "nvmf_tgt_poll_group_000", 00:18:22.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.627 "listen_address": { 00:18:22.627 "trtype": "TCP", 00:18:22.627 "adrfam": "IPv4", 00:18:22.627 "traddr": "10.0.0.2", 00:18:22.627 "trsvcid": "4420" 00:18:22.627 }, 00:18:22.627 "peer_address": { 00:18:22.627 "trtype": "TCP", 00:18:22.627 "adrfam": "IPv4", 00:18:22.627 "traddr": "10.0.0.1", 00:18:22.627 "trsvcid": "45184" 00:18:22.627 }, 00:18:22.627 "auth": { 00:18:22.627 "state": "completed", 00:18:22.627 "digest": "sha512", 00:18:22.627 "dhgroup": "ffdhe8192" 00:18:22.627 } 00:18:22.627 } 00:18:22.627 ]' 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.627 13:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.627 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.888 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:22.888 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: --dhchap-ctrl-secret DHHC-1:02:ZjkyZjQ5YTE1MDcyZjI3MjllOTk4MGYyYTQwMTI3YmVkNTkyNWRlODVjMWJiNThlEBhEgg==: 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.460 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.721 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.293 00:18:24.293 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.293 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.293 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.554 13:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.554 { 00:18:24.554 "cntlid": 141, 00:18:24.554 "qid": 0, 00:18:24.554 "state": "enabled", 00:18:24.554 "thread": "nvmf_tgt_poll_group_000", 00:18:24.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.554 "listen_address": { 00:18:24.554 "trtype": "TCP", 00:18:24.554 "adrfam": "IPv4", 00:18:24.554 "traddr": "10.0.0.2", 00:18:24.554 "trsvcid": "4420" 00:18:24.554 }, 00:18:24.554 "peer_address": { 00:18:24.554 "trtype": "TCP", 00:18:24.554 "adrfam": "IPv4", 00:18:24.554 "traddr": "10.0.0.1", 00:18:24.554 "trsvcid": "45202" 00:18:24.554 }, 00:18:24.554 "auth": { 00:18:24.554 "state": "completed", 00:18:24.554 "digest": "sha512", 00:18:24.554 "dhgroup": "ffdhe8192" 00:18:24.554 } 00:18:24.554 } 00:18:24.554 ]' 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.554 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.816 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:24.816 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:01:Zjc5Y2UxY2UxMDNjZmY1NmVlNWQ3YmQ3MmMyMDY3ODYS3Fu1: 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.387 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.648 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.219 00:18:26.219 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.219 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.219 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.479 { 00:18:26.479 "cntlid": 143, 00:18:26.479 "qid": 0, 00:18:26.479 "state": "enabled", 00:18:26.479 "thread": "nvmf_tgt_poll_group_000", 00:18:26.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.479 "listen_address": { 00:18:26.479 "trtype": "TCP", 00:18:26.479 "adrfam": 
"IPv4", 00:18:26.479 "traddr": "10.0.0.2", 00:18:26.479 "trsvcid": "4420" 00:18:26.479 }, 00:18:26.479 "peer_address": { 00:18:26.479 "trtype": "TCP", 00:18:26.479 "adrfam": "IPv4", 00:18:26.479 "traddr": "10.0.0.1", 00:18:26.479 "trsvcid": "53480" 00:18:26.479 }, 00:18:26.479 "auth": { 00:18:26.479 "state": "completed", 00:18:26.479 "digest": "sha512", 00:18:26.479 "dhgroup": "ffdhe8192" 00:18:26.479 } 00:18:26.479 } 00:18:26.479 ]' 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.479 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.739 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:26.739 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.309 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.570 13:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.570 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.142 00:18:28.142 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.142 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.142 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.402 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.403 { 00:18:28.403 "cntlid": 145, 00:18:28.403 "qid": 0, 00:18:28.403 "state": "enabled", 00:18:28.403 "thread": "nvmf_tgt_poll_group_000", 00:18:28.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.403 "listen_address": { 00:18:28.403 "trtype": "TCP", 00:18:28.403 "adrfam": "IPv4", 00:18:28.403 "traddr": "10.0.0.2", 00:18:28.403 "trsvcid": "4420" 00:18:28.403 }, 00:18:28.403 "peer_address": { 00:18:28.403 "trtype": "TCP", 00:18:28.403 "adrfam": "IPv4", 00:18:28.403 "traddr": "10.0.0.1", 00:18:28.403 "trsvcid": "53518" 00:18:28.403 }, 00:18:28.403 "auth": { 00:18:28.403 "state": 
"completed", 00:18:28.403 "digest": "sha512", 00:18:28.403 "dhgroup": "ffdhe8192" 00:18:28.403 } 00:18:28.403 } 00:18:28.403 ]' 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.403 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.663 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:28.663 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTJhMGZkZWZhMzVjZDgwMTJlZmRmY2JiZjM3ZTg2MTgyZWRmOTJkYjdmYTUwZjExCBMi/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjkwZWQ4NjRlYmZhMDlhZGNhMmJiYjg3Y2Q5MDNmZDRkNDQ0OWMwOGYxMzcxNGViZGRiODE0YjExYTZlNDhkNlnb2Ho=: 00:18:29.232 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:29.492 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:29.753 request: 00:18:29.753 { 00:18:29.753 "name": "nvme0", 00:18:29.753 "trtype": "tcp", 00:18:29.753 "traddr": "10.0.0.2", 00:18:29.753 "adrfam": "ipv4", 00:18:29.753 "trsvcid": "4420", 00:18:29.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.753 "prchk_reftag": false, 00:18:29.753 "prchk_guard": false, 00:18:29.753 "hdgst": false, 00:18:29.753 "ddgst": false, 00:18:29.753 "dhchap_key": "key2", 00:18:29.753 "allow_unrecognized_csi": false, 00:18:29.753 "method": "bdev_nvme_attach_controller", 00:18:29.753 "req_id": 1 00:18:29.753 } 00:18:29.753 Got JSON-RPC error response 00:18:29.753 response: 00:18:29.753 { 00:18:29.753 "code": -5, 00:18:29.753 "message": 
"Input/output error" 00:18:29.753 } 00:18:29.753 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.753 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.753 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.753 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.753 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:30.013 13:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:30.013 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:30.273 request: 00:18:30.273 { 00:18:30.273 "name": "nvme0", 00:18:30.273 "trtype": "tcp", 00:18:30.273 "traddr": "10.0.0.2", 00:18:30.273 "adrfam": "ipv4", 00:18:30.273 "trsvcid": "4420", 00:18:30.273 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.273 "prchk_reftag": false, 00:18:30.273 "prchk_guard": false, 00:18:30.273 "hdgst": 
false, 00:18:30.273 "ddgst": false, 00:18:30.273 "dhchap_key": "key1", 00:18:30.273 "dhchap_ctrlr_key": "ckey2", 00:18:30.273 "allow_unrecognized_csi": false, 00:18:30.273 "method": "bdev_nvme_attach_controller", 00:18:30.273 "req_id": 1 00:18:30.273 } 00:18:30.273 Got JSON-RPC error response 00:18:30.273 response: 00:18:30.273 { 00:18:30.273 "code": -5, 00:18:30.273 "message": "Input/output error" 00:18:30.273 } 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.533 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.794 request: 00:18:30.794 { 00:18:30.794 "name": "nvme0", 00:18:30.794 "trtype": 
"tcp", 00:18:30.794 "traddr": "10.0.0.2", 00:18:30.794 "adrfam": "ipv4", 00:18:30.794 "trsvcid": "4420", 00:18:30.794 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.794 "prchk_reftag": false, 00:18:30.794 "prchk_guard": false, 00:18:30.794 "hdgst": false, 00:18:30.794 "ddgst": false, 00:18:30.794 "dhchap_key": "key1", 00:18:30.794 "dhchap_ctrlr_key": "ckey1", 00:18:30.794 "allow_unrecognized_csi": false, 00:18:30.794 "method": "bdev_nvme_attach_controller", 00:18:30.794 "req_id": 1 00:18:30.794 } 00:18:30.794 Got JSON-RPC error response 00:18:30.794 response: 00:18:30.794 { 00:18:30.794 "code": -5, 00:18:30.794 "message": "Input/output error" 00:18:30.794 } 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 611924 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 611924 ']' 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 611924 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 611924 00:18:31.054 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 611924' 00:18:31.055 killing process with pid 611924 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 611924 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 611924 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=639795 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 639795 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 639795 ']' 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.055 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 639795 00:18:31.996 
13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 639795 ']' 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.996 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 null0 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x79 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 
13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Q6U ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q6U 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rNG 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.VKy ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VKy 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3kV 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JXd ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JXd 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ndl 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.257 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.517 13:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.517 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.518 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.458 nvme0n1 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.458 { 00:18:33.458 "cntlid": 1, 00:18:33.458 "qid": 0, 00:18:33.458 "state": "enabled", 00:18:33.458 "thread": "nvmf_tgt_poll_group_000", 00:18:33.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.458 "listen_address": { 00:18:33.458 "trtype": "TCP", 00:18:33.458 "adrfam": "IPv4", 00:18:33.458 "traddr": "10.0.0.2", 00:18:33.458 "trsvcid": "4420" 00:18:33.458 }, 00:18:33.458 "peer_address": { 00:18:33.458 "trtype": "TCP", 00:18:33.458 "adrfam": "IPv4", 00:18:33.458 "traddr": "10.0.0.1", 00:18:33.458 "trsvcid": "53570" 00:18:33.458 }, 00:18:33.458 "auth": { 
00:18:33.458 "state": "completed", 00:18:33.458 "digest": "sha512", 00:18:33.458 "dhgroup": "ffdhe8192" 00:18:33.458 } 00:18:33.458 } 00:18:33.458 ]' 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.458 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.459 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.459 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.718 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:33.718 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=: 00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:34.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:34.659 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:34.920 request:
00:18:34.920 {
00:18:34.920 "name": "nvme0",
00:18:34.920 "trtype": "tcp",
00:18:34.920 "traddr": "10.0.0.2",
00:18:34.920 "adrfam": "ipv4",
00:18:34.920 "trsvcid": "4420",
00:18:34.920 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:34.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:34.920 "prchk_reftag": false,
00:18:34.920 "prchk_guard": false,
00:18:34.920 "hdgst": false,
00:18:34.920 "ddgst": false,
00:18:34.920 "dhchap_key": "key3",
00:18:34.920 "allow_unrecognized_csi": false,
00:18:34.920 "method": "bdev_nvme_attach_controller",
00:18:34.920 "req_id": 1
00:18:34.920 }
00:18:34.920 Got JSON-RPC error response
00:18:34.920 response:
00:18:34.920 {
00:18:34.920 "code": -5,
00:18:34.920 "message": "Input/output error"
00:18:34.920 }
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:34.920 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:35.180 request:
00:18:35.180 {
00:18:35.180 "name": "nvme0",
00:18:35.180 "trtype": "tcp",
00:18:35.180 "traddr": "10.0.0.2",
00:18:35.180 "adrfam": "ipv4",
00:18:35.180 "trsvcid": "4420",
00:18:35.180 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:35.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:35.180 "prchk_reftag": false,
00:18:35.180 "prchk_guard": false,
00:18:35.180 "hdgst": false,
00:18:35.180 "ddgst": false,
00:18:35.180 "dhchap_key": "key3",
00:18:35.180 "allow_unrecognized_csi": false,
00:18:35.180 "method": "bdev_nvme_attach_controller",
00:18:35.180 "req_id": 1
00:18:35.180 }
00:18:35.180 Got JSON-RPC error response
00:18:35.180 response:
00:18:35.180 {
00:18:35.180 "code": -5,
00:18:35.180 "message": "Input/output error"
00:18:35.180 }
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:35.180 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:35.441 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:35.700 request:
00:18:35.700 {
00:18:35.700 "name": "nvme0",
00:18:35.700 "trtype": "tcp",
00:18:35.700 "traddr": "10.0.0.2",
00:18:35.700 "adrfam": "ipv4",
00:18:35.700 "trsvcid": "4420",
00:18:35.700 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:35.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:35.700 "prchk_reftag": false,
00:18:35.700 "prchk_guard": false,
00:18:35.700 "hdgst": false,
00:18:35.700 "ddgst": false,
00:18:35.700 "dhchap_key": "key0",
00:18:35.700 "dhchap_ctrlr_key": "key1",
00:18:35.700 "allow_unrecognized_csi": false,
00:18:35.700 "method": "bdev_nvme_attach_controller",
00:18:35.700 "req_id": 1
00:18:35.700 }
00:18:35.700 Got JSON-RPC error response
00:18:35.700 response:
00:18:35.700 {
00:18:35.700 "code": -5,
00:18:35.700 "message": "Input/output error"
00:18:35.700 }
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:35.701 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:35.960 nvme0n1
00:18:35.960 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:18:35.960 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:18:35.960 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:36.220 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:36.220 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:36.220 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:36.482 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:37.423 nvme0n1
00:18:37.423 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:37.423 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.424 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:37.684 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.684 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:18:37.684 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: --dhchap-ctrl-secret DHHC-1:03:ODFhYjU4ZDc0MzEwMWMxNjgwZTI4YmU3ZDZmOTI2MjlkYzg5NWEwM2VhZGY4ZjE4NDM0MDc4NmI3ZmFmZGIwMUg5g+0=:
00:18:38.625 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:38.625 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:38.625 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:38.626 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:39.197 request:
00:18:39.197 {
00:18:39.197 "name": "nvme0",
00:18:39.197 "trtype": "tcp",
00:18:39.197 "traddr": "10.0.0.2",
00:18:39.197 "adrfam": "ipv4",
00:18:39.197 "trsvcid": "4420",
00:18:39.197 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:39.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:39.197 "prchk_reftag": false,
00:18:39.197 "prchk_guard": false,
00:18:39.197 "hdgst": false,
00:18:39.197 "ddgst": false,
00:18:39.197 "dhchap_key": "key1",
00:18:39.197 "allow_unrecognized_csi": false,
00:18:39.197 "method": "bdev_nvme_attach_controller",
00:18:39.197 "req_id": 1
00:18:39.197 }
00:18:39.197 Got JSON-RPC error response
00:18:39.197 response:
00:18:39.197 {
00:18:39.197 "code": -5,
00:18:39.197 "message": "Input/output error"
00:18:39.197 }
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:39.197 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:40.140 nvme0n1
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:40.140 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:40.401 nvme0n1
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:40.401 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.662 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.662 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.662 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: '' 2s
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS:
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS: ]]
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MmZmMTNmM2Q3MThkYjM5MDEzMWU1NWE1NjBmMTg2ZTnGYGMS:
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:40.922 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: 2s
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==:
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==: ]]
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTA3M2E1NGNhNGE0OGQzYzgzNGU1OWEwZTBmODhkZDczYzljYjVjZWEzYzBkMDdlBGumrQ==:
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:42.835 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:45.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:45.394 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:45.966 nvme0n1
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:45.966 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:18:46.536 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:18:46.797 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:18:46.797 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:18:46.798 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:47.059 13:43:10
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:47.059 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:47.631 request: 00:18:47.631 { 00:18:47.631 "name": "nvme0", 00:18:47.631 "dhchap_key": "key1", 00:18:47.631 "dhchap_ctrlr_key": "key3", 00:18:47.631 "method": "bdev_nvme_set_keys", 00:18:47.631 "req_id": 1 00:18:47.631 } 00:18:47.631 Got JSON-RPC error response 00:18:47.631 response: 00:18:47.631 { 00:18:47.631 "code": -13, 00:18:47.631 "message": "Permission denied" 00:18:47.631 } 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:47.631 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:48.572 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:48.572 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:48.572 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.833 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:49.775 nvme0n1 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:49.775 
13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:49.775 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:50.347 request: 00:18:50.347 { 00:18:50.347 "name": "nvme0", 00:18:50.347 "dhchap_key": "key2", 00:18:50.347 "dhchap_ctrlr_key": "key0", 00:18:50.347 "method": "bdev_nvme_set_keys", 00:18:50.347 "req_id": 1 00:18:50.347 } 00:18:50.347 Got JSON-RPC error response 00:18:50.347 response: 00:18:50.347 { 00:18:50.347 "code": -13, 00:18:50.347 "message": "Permission denied" 00:18:50.347 } 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.347 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:50.347 13:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 612151 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 612151 ']' 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 612151 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612151 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612151' 00:18:51.732 killing process with 
pid 612151 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 612151 00:18:51.732 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 612151 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.993 rmmod nvme_tcp 00:18:51.993 rmmod nvme_fabrics 00:18:51.993 rmmod nvme_keyring 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 639795 ']' 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 639795 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 639795 ']' 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 639795 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:51.993 
13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 639795 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 639795' 00:18:51.993 killing process with pid 639795 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 639795 00:18:51.993 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 639795 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:52.254 13:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.254 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.166 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:54.166 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.x79 /tmp/spdk.key-sha256.rNG /tmp/spdk.key-sha384.3kV /tmp/spdk.key-sha512.ndl /tmp/spdk.key-sha512.Q6U /tmp/spdk.key-sha384.VKy /tmp/spdk.key-sha256.JXd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:54.166 00:18:54.167 real 2m45.153s 00:18:54.167 user 6m8.583s 00:18:54.167 sys 0m24.407s 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.167 ************************************ 00:18:54.167 END TEST nvmf_auth_target 00:18:54.167 ************************************ 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:54.167 13:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:18:54.435 ************************************ 00:18:54.435 START TEST nvmf_bdevio_no_huge 00:18:54.435 ************************************ 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:54.435 * Looking for test storage... 00:18:54.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 
-- # ver1_l=2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.435 13:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:54.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.435 --rc genhtml_branch_coverage=1 00:18:54.435 --rc genhtml_function_coverage=1 00:18:54.435 --rc genhtml_legend=1 00:18:54.435 --rc geninfo_all_blocks=1 00:18:54.435 --rc geninfo_unexecuted_blocks=1 00:18:54.435 00:18:54.435 ' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:54.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.435 --rc genhtml_branch_coverage=1 00:18:54.435 --rc genhtml_function_coverage=1 00:18:54.435 --rc genhtml_legend=1 00:18:54.435 --rc geninfo_all_blocks=1 00:18:54.435 --rc geninfo_unexecuted_blocks=1 00:18:54.435 00:18:54.435 ' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:54.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.435 --rc genhtml_branch_coverage=1 00:18:54.435 --rc genhtml_function_coverage=1 00:18:54.435 --rc genhtml_legend=1 00:18:54.435 --rc geninfo_all_blocks=1 00:18:54.435 --rc geninfo_unexecuted_blocks=1 00:18:54.435 00:18:54.435 ' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:54.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.435 --rc genhtml_branch_coverage=1 00:18:54.435 --rc genhtml_function_coverage=1 00:18:54.435 --rc 
genhtml_legend=1 00:18:54.435 --rc geninfo_all_blocks=1 00:18:54.435 --rc geninfo_unexecuted_blocks=1 00:18:54.435 00:18:54.435 ' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.435 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.436 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:19:02.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:02.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:02.705 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:02.706 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.706 
13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:02.706 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:02.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:19:02.706 00:19:02.706 --- 10.0.0.2 ping statistics --- 00:19:02.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.706 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:19:02.706 00:19:02.706 --- 10.0.0.1 ping statistics --- 00:19:02.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.706 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=648223 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 648223 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 648223 ']' 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.706 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.707 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.707 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.707 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 [2024-11-06 13:43:25.046003] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:02.707 [2024-11-06 13:43:25.046061] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:02.707 [2024-11-06 13:43:25.149016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.707 [2024-11-06 13:43:25.209295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.707 [2024-11-06 13:43:25.209343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.707 [2024-11-06 13:43:25.209353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.707 [2024-11-06 13:43:25.209360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.707 [2024-11-06 13:43:25.209366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.707 [2024-11-06 13:43:25.210896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:02.707 [2024-11-06 13:43:25.211135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:02.707 [2024-11-06 13:43:25.211384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:02.707 [2024-11-06 13:43:25.211499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 [2024-11-06 13:43:25.923246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.707 13:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 Malloc0 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 [2024-11-06 13:43:25.977227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.707 13:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:02.707 { 00:19:02.707 "params": { 00:19:02.707 "name": "Nvme$subsystem", 00:19:02.707 "trtype": "$TEST_TRANSPORT", 00:19:02.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.707 "adrfam": "ipv4", 00:19:02.707 "trsvcid": "$NVMF_PORT", 00:19:02.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.707 "hdgst": ${hdgst:-false}, 00:19:02.707 "ddgst": ${ddgst:-false} 00:19:02.707 }, 00:19:02.707 "method": "bdev_nvme_attach_controller" 00:19:02.707 } 00:19:02.707 EOF 00:19:02.707 )") 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:02.707 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:02.707 "params": { 00:19:02.707 "name": "Nvme1", 00:19:02.707 "trtype": "tcp", 00:19:02.707 "traddr": "10.0.0.2", 00:19:02.707 "adrfam": "ipv4", 00:19:02.707 "trsvcid": "4420", 00:19:02.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.707 "hdgst": false, 00:19:02.707 "ddgst": false 00:19:02.707 }, 00:19:02.707 "method": "bdev_nvme_attach_controller" 00:19:02.707 }' 00:19:02.707 [2024-11-06 13:43:26.043798] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:19:02.708 [2024-11-06 13:43:26.043871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid648321 ] 00:19:02.968 [2024-11-06 13:43:26.125438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.968 [2024-11-06 13:43:26.181398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.968 [2024-11-06 13:43:26.181524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.968 [2024-11-06 13:43:26.181528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.231 I/O targets: 00:19:03.231 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:03.231 00:19:03.231 00:19:03.231 CUnit - A unit testing framework for C - Version 2.1-3 00:19:03.231 http://cunit.sourceforge.net/ 00:19:03.231 00:19:03.231 00:19:03.231 Suite: bdevio tests on: Nvme1n1 00:19:03.231 Test: blockdev write read block ...passed 00:19:03.231 Test: blockdev write zeroes read block ...passed 00:19:03.231 Test: blockdev write zeroes read no split ...passed 00:19:03.231 Test: blockdev write zeroes 
read split ...passed 00:19:03.231 Test: blockdev write zeroes read split partial ...passed 00:19:03.231 Test: blockdev reset ...[2024-11-06 13:43:26.476584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:03.231 [2024-11-06 13:43:26.476653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1800 (9): Bad file descriptor 00:19:03.492 [2024-11-06 13:43:26.626488] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:03.492 passed 00:19:03.492 Test: blockdev write read 8 blocks ...passed 00:19:03.492 Test: blockdev write read size > 128k ...passed 00:19:03.492 Test: blockdev write read invalid size ...passed 00:19:03.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.492 Test: blockdev write read max offset ...passed 00:19:03.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.492 Test: blockdev writev readv 8 blocks ...passed 00:19:03.492 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.492 Test: blockdev writev readv block ...passed 00:19:03.492 Test: blockdev writev readv size > 128k ...passed 00:19:03.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.492 Test: blockdev comparev and writev ...[2024-11-06 13:43:26.850882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.850906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.850918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 
13:43:26.850924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.851414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.851422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.851436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.851911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.851919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.851928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.851933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.852431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.852439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:03.492 [2024-11-06 13:43:26.852448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.492 [2024-11-06 13:43:26.852453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:03.753 passed 00:19:03.753 Test: blockdev nvme passthru rw ...passed 00:19:03.753 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:43:26.937531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.753 [2024-11-06 13:43:26.937542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:03.753 [2024-11-06 13:43:26.937885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.753 [2024-11-06 13:43:26.937894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:03.753 [2024-11-06 13:43:26.938248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.753 [2024-11-06 13:43:26.938255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:03.753 [2024-11-06 13:43:26.938586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.754 [2024-11-06 13:43:26.938599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:03.754 passed 00:19:03.754 Test: blockdev nvme admin passthru ...passed 00:19:03.754 Test: blockdev copy ...passed 00:19:03.754 00:19:03.754 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.754 suites 1 1 n/a 0 0 00:19:03.754 tests 23 23 23 0 0 00:19:03.754 asserts 152 152 152 0 n/a 00:19:03.754 00:19:03.754 Elapsed time = 1.318 seconds 
00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.015 rmmod nvme_tcp 00:19:04.015 rmmod nvme_fabrics 00:19:04.015 rmmod nvme_keyring 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 648223 ']' 00:19:04.015 13:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 648223 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 648223 ']' 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 648223 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.015 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 648223 00:19:04.277 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:04.277 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:04.277 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 648223' 00:19:04.277 killing process with pid 648223 00:19:04.277 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 648223 00:19:04.277 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 648223 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:04.539 13:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.539 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.088 00:19:07.088 real 0m12.305s 00:19:07.088 user 0m14.142s 00:19:07.088 sys 0m6.470s 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.088 ************************************ 00:19:07.088 END TEST nvmf_bdevio_no_huge 00:19:07.088 ************************************ 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.088 
************************************ 00:19:07.088 START TEST nvmf_tls 00:19:07.088 ************************************ 00:19:07.088 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:07.088 * Looking for test storage... 00:19:07.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.088 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.089 --rc genhtml_branch_coverage=1 00:19:07.089 --rc genhtml_function_coverage=1 00:19:07.089 --rc genhtml_legend=1 00:19:07.089 --rc geninfo_all_blocks=1 00:19:07.089 --rc geninfo_unexecuted_blocks=1 00:19:07.089 00:19:07.089 ' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.089 --rc genhtml_branch_coverage=1 00:19:07.089 --rc genhtml_function_coverage=1 00:19:07.089 --rc genhtml_legend=1 00:19:07.089 --rc geninfo_all_blocks=1 00:19:07.089 --rc geninfo_unexecuted_blocks=1 00:19:07.089 00:19:07.089 ' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.089 --rc genhtml_branch_coverage=1 00:19:07.089 --rc genhtml_function_coverage=1 00:19:07.089 --rc genhtml_legend=1 00:19:07.089 --rc geninfo_all_blocks=1 00:19:07.089 --rc geninfo_unexecuted_blocks=1 00:19:07.089 00:19:07.089 ' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:07.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.089 --rc genhtml_branch_coverage=1 00:19:07.089 --rc genhtml_function_coverage=1 00:19:07.089 --rc genhtml_legend=1 00:19:07.089 --rc geninfo_all_blocks=1 00:19:07.089 --rc geninfo_unexecuted_blocks=1 00:19:07.089 00:19:07.089 ' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.089 
13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:07.089 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:07.090 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.235 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.236 13:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:15.236 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:15.236 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.236 13:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:15.236 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:15.236 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:15.236 13:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:15.236 
13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:15.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:19:15.236 00:19:15.236 --- 10.0.0.2 ping statistics --- 00:19:15.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.236 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:19:15.236 00:19:15.236 --- 10.0.0.1 ping statistics --- 00:19:15.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.236 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=652929 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 652929 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 652929 ']' 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.236 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.236 [2024-11-06 13:43:37.593581] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:19:15.237 [2024-11-06 13:43:37.593650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.237 [2024-11-06 13:43:37.696989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.237 [2024-11-06 13:43:37.747436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.237 [2024-11-06 13:43:37.747486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:15.237 [2024-11-06 13:43:37.747495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.237 [2024-11-06 13:43:37.747502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.237 [2024-11-06 13:43:37.747508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.237 [2024-11-06 13:43:37.748301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:15.237 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:15.497 true 00:19:15.497 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:15.497 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:15.497 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:15.497 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:15.497 
13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:15.758 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:15.758 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:16.018 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:16.018 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:16.018 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.288 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:16.550 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:16.550 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:16.550 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:16.810 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.810 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:16.810 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:16.810 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:16.810 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:17.071 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.071 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:17.331 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:17.331 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:17.332 13:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.2YUygqi0Zm 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.v7GbzbinxL 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2YUygqi0Zm 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.v7GbzbinxL 00:19:17.332 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:17.592 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:17.852 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.2YUygqi0Zm 00:19:17.852 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2YUygqi0Zm 00:19:17.852 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.112 [2024-11-06 13:43:41.240842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.112 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.112 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.374 [2024-11-06 13:43:41.601710] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.374 [2024-11-06 13:43:41.601937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.374 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.633 malloc0 00:19:18.633 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.633 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2YUygqi0Zm 00:19:18.892 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.152 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2YUygqi0Zm 00:19:29.151 Initializing NVMe Controllers 00:19:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:29.151 Initialization complete. Launching workers. 
00:19:29.151 ======================================================== 00:19:29.151 Latency(us) 00:19:29.151 Device Information : IOPS MiB/s Average min max 00:19:29.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18701.84 73.05 3422.16 1042.39 4110.36 00:19:29.151 ======================================================== 00:19:29.151 Total : 18701.84 73.05 3422.16 1042.39 4110.36 00:19:29.151 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2YUygqi0Zm 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2YUygqi0Zm 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=655724 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 655724 /var/tmp/bdevperf.sock 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 655724 ']' 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:29.151 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.151 [2024-11-06 13:43:52.465255] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:19:29.151 [2024-11-06 13:43:52.465312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655724 ] 00:19:29.151 [2024-11-06 13:43:52.523509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.413 [2024-11-06 13:43:52.552689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.414 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.414 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:29.414 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2YUygqi0Zm 00:19:29.675 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:29.936 [2024-11-06 13:43:53.054215] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.936 TLSTESTn1 00:19:29.936 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:29.936 Running I/O for 10 seconds... 00:19:32.264 5343.00 IOPS, 20.87 MiB/s [2024-11-06T12:43:56.582Z] 5522.00 IOPS, 21.57 MiB/s [2024-11-06T12:43:57.523Z] 5761.67 IOPS, 22.51 MiB/s [2024-11-06T12:43:58.465Z] 5670.75 IOPS, 22.15 MiB/s [2024-11-06T12:43:59.406Z] 5648.60 IOPS, 22.06 MiB/s [2024-11-06T12:44:00.348Z] 5518.17 IOPS, 21.56 MiB/s [2024-11-06T12:44:01.300Z] 5369.14 IOPS, 20.97 MiB/s [2024-11-06T12:44:02.681Z] 5325.12 IOPS, 20.80 MiB/s [2024-11-06T12:44:03.252Z] 5283.89 IOPS, 20.64 MiB/s [2024-11-06T12:44:03.514Z] 5251.70 IOPS, 20.51 MiB/s 00:19:40.138 Latency(us) 00:19:40.138 [2024-11-06T12:44:03.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.138 Verification LBA range: start 0x0 length 0x2000 00:19:40.138 TLSTESTn1 : 10.02 5252.08 20.52 0.00 0.00 24329.54 5297.49 86507.52 00:19:40.138 [2024-11-06T12:44:03.514Z] =================================================================================================================== 00:19:40.138 [2024-11-06T12:44:03.514Z] Total : 5252.08 20.52 0.00 0.00 24329.54 5297.49 86507.52 00:19:40.138 { 00:19:40.138 "results": [ 00:19:40.138 { 00:19:40.138 "job": "TLSTESTn1", 00:19:40.138 "core_mask": "0x4", 00:19:40.138 "workload": "verify", 00:19:40.138 "status": "finished", 00:19:40.138 "verify_range": { 00:19:40.138 "start": 0, 00:19:40.138 "length": 8192 00:19:40.138 }, 00:19:40.138 "queue_depth": 128, 00:19:40.138 "io_size": 4096, 00:19:40.138 "runtime": 10.023455, 00:19:40.138 "iops": 
5252.081243443503, 00:19:40.138 "mibps": 20.515942357201183, 00:19:40.138 "io_failed": 0, 00:19:40.138 "io_timeout": 0, 00:19:40.138 "avg_latency_us": 24329.543639541072, 00:19:40.138 "min_latency_us": 5297.493333333333, 00:19:40.138 "max_latency_us": 86507.52 00:19:40.138 } 00:19:40.138 ], 00:19:40.138 "core_count": 1 00:19:40.138 } 00:19:40.138 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.138 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 655724 00:19:40.138 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 655724 ']' 00:19:40.138 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 655724 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 655724 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 655724' 00:19:40.139 killing process with pid 655724 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 655724 00:19:40.139 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.139 00:19:40.139 Latency(us) 00:19:40.139 [2024-11-06T12:44:03.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.139 [2024-11-06T12:44:03.515Z] 
=================================================================================================================== 00:19:40.139 [2024-11-06T12:44:03.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 655724 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v7GbzbinxL 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v7GbzbinxL 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v7GbzbinxL 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.v7GbzbinxL 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=657940 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 657940 /var/tmp/bdevperf.sock 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 657940 ']' 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.139 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.139 [2024-11-06 13:44:03.479316] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:40.139 [2024-11-06 13:44:03.479363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657940 ] 00:19:40.400 [2024-11-06 13:44:03.529403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.400 [2024-11-06 13:44:03.558376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.400 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.400 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:40.400 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v7GbzbinxL 00:19:40.661 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.661 [2024-11-06 13:44:04.003791] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.661 [2024-11-06 13:44:04.013636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:40.661 [2024-11-06 13:44:04.014094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fbb0 (107): Transport endpoint is not connected 00:19:40.661 [2024-11-06 13:44:04.015090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fbb0 (9): Bad file descriptor 00:19:40.661 
[2024-11-06 13:44:04.016091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:40.661 [2024-11-06 13:44:04.016100] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:40.661 [2024-11-06 13:44:04.016105] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:40.661 [2024-11-06 13:44:04.016113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:40.661 request: 00:19:40.661 { 00:19:40.661 "name": "TLSTEST", 00:19:40.661 "trtype": "tcp", 00:19:40.661 "traddr": "10.0.0.2", 00:19:40.661 "adrfam": "ipv4", 00:19:40.661 "trsvcid": "4420", 00:19:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.661 "prchk_reftag": false, 00:19:40.661 "prchk_guard": false, 00:19:40.661 "hdgst": false, 00:19:40.661 "ddgst": false, 00:19:40.661 "psk": "key0", 00:19:40.661 "allow_unrecognized_csi": false, 00:19:40.661 "method": "bdev_nvme_attach_controller", 00:19:40.661 "req_id": 1 00:19:40.661 } 00:19:40.661 Got JSON-RPC error response 00:19:40.661 response: 00:19:40.661 { 00:19:40.661 "code": -5, 00:19:40.661 "message": "Input/output error" 00:19:40.661 } 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 657940 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 657940 ']' 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 657940 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 657940 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 657940' 00:19:40.923 killing process with pid 657940 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 657940 00:19:40.923 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.923 00:19:40.923 Latency(us) 00:19:40.923 [2024-11-06T12:44:04.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.923 [2024-11-06T12:44:04.299Z] =================================================================================================================== 00:19:40.923 [2024-11-06T12:44:04.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 657940 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2YUygqi0Zm 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2YUygqi0Zm 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2YUygqi0Zm 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2YUygqi0Zm 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=658079 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 658079 /var/tmp/bdevperf.sock 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 
-w verify -t 10 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 658079 ']' 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.923 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.923 [2024-11-06 13:44:04.270293] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:40.923 [2024-11-06 13:44:04.270348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658079 ] 00:19:41.185 [2024-11-06 13:44:04.328779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.185 [2024-11-06 13:44:04.356556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.185 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.185 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:41.185 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2YUygqi0Zm 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:41.446 [2024-11-06 13:44:04.773617] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.446 [2024-11-06 13:44:04.778063] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:41.446 [2024-11-06 13:44:04.778086] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:41.446 [2024-11-06 13:44:04.778106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:41.446 [2024-11-06 13:44:04.778749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dbbb0 (107): Transport endpoint is not connected 00:19:41.446 [2024-11-06 13:44:04.779740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dbbb0 (9): Bad file descriptor 00:19:41.446 [2024-11-06 13:44:04.780742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:41.446 [2024-11-06 13:44:04.780757] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:41.446 [2024-11-06 13:44:04.780763] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:41.446 [2024-11-06 13:44:04.780771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:41.446 request: 00:19:41.446 { 00:19:41.446 "name": "TLSTEST", 00:19:41.446 "trtype": "tcp", 00:19:41.446 "traddr": "10.0.0.2", 00:19:41.446 "adrfam": "ipv4", 00:19:41.446 "trsvcid": "4420", 00:19:41.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.446 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:41.446 "prchk_reftag": false, 00:19:41.446 "prchk_guard": false, 00:19:41.446 "hdgst": false, 00:19:41.446 "ddgst": false, 00:19:41.446 "psk": "key0", 00:19:41.446 "allow_unrecognized_csi": false, 00:19:41.446 "method": "bdev_nvme_attach_controller", 00:19:41.446 "req_id": 1 00:19:41.446 } 00:19:41.446 Got JSON-RPC error response 00:19:41.446 response: 00:19:41.446 { 00:19:41.446 "code": -5, 00:19:41.446 "message": "Input/output error" 00:19:41.446 } 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 658079 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 658079 ']' 00:19:41.446 13:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 658079 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:41.446 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658079 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658079' 00:19:41.707 killing process with pid 658079 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 658079 00:19:41.707 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.707 00:19:41.707 Latency(us) 00:19:41.707 [2024-11-06T12:44:05.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.707 [2024-11-06T12:44:05.083Z] =================================================================================================================== 00:19:41.707 [2024-11-06T12:44:05.083Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 658079 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:41.707 13:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2YUygqi0Zm 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2YUygqi0Zm 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2YUygqi0Zm 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2YUygqi0Zm 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=658203 00:19:41.707 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 658203 /var/tmp/bdevperf.sock 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 658203 ']' 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:41.708 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.708 [2024-11-06 13:44:05.021024] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:41.708 [2024-11-06 13:44:05.021081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658203 ] 00:19:41.708 [2024-11-06 13:44:05.079480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.968 [2024-11-06 13:44:05.107616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.968 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.968 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:41.968 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2YUygqi0Zm 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.229 [2024-11-06 13:44:05.524688] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.229 [2024-11-06 13:44:05.529248] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:42.229 [2024-11-06 13:44:05.529269] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:42.229 [2024-11-06 13:44:05.529296] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:42.229 [2024-11-06 13:44:05.529927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9cbb0 (107): Transport endpoint is not connected 00:19:42.229 [2024-11-06 13:44:05.530922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9cbb0 (9): Bad file descriptor 00:19:42.229 [2024-11-06 13:44:05.531924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:42.229 [2024-11-06 13:44:05.531931] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:42.229 [2024-11-06 13:44:05.531937] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:42.229 [2024-11-06 13:44:05.531944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:42.229 request: 00:19:42.229 { 00:19:42.229 "name": "TLSTEST", 00:19:42.229 "trtype": "tcp", 00:19:42.229 "traddr": "10.0.0.2", 00:19:42.229 "adrfam": "ipv4", 00:19:42.229 "trsvcid": "4420", 00:19:42.229 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.229 "prchk_reftag": false, 00:19:42.229 "prchk_guard": false, 00:19:42.229 "hdgst": false, 00:19:42.229 "ddgst": false, 00:19:42.229 "psk": "key0", 00:19:42.229 "allow_unrecognized_csi": false, 00:19:42.229 "method": "bdev_nvme_attach_controller", 00:19:42.229 "req_id": 1 00:19:42.229 } 00:19:42.229 Got JSON-RPC error response 00:19:42.229 response: 00:19:42.229 { 00:19:42.229 "code": -5, 00:19:42.229 "message": "Input/output error" 00:19:42.229 } 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 658203 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 658203 ']' 00:19:42.229 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 658203 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:42.229 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658203 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658203' 00:19:42.490 killing process with pid 658203 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 658203 00:19:42.490 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.490 00:19:42.490 Latency(us) 00:19:42.490 [2024-11-06T12:44:05.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.490 [2024-11-06T12:44:05.866Z] =================================================================================================================== 00:19:42.490 [2024-11-06T12:44:05.866Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 658203 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.490 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=658429 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.490 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 658429 /var/tmp/bdevperf.sock 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 658429 ']' 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.490 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.490 [2024-11-06 13:44:05.772562] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:42.490 [2024-11-06 13:44:05.772619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658429 ] 00:19:42.490 [2024-11-06 13:44:05.830670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.490 [2024-11-06 13:44:05.858513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.751 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:42.751 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:42.751 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:42.751 [2024-11-06 13:44:06.095060] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:42.751 [2024-11-06 13:44:06.095086] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:42.751 request: 00:19:42.751 { 00:19:42.751 "name": "key0", 00:19:42.751 "path": "", 00:19:42.751 "method": "keyring_file_add_key", 00:19:42.751 "req_id": 1 00:19:42.751 } 00:19:42.751 Got JSON-RPC error response 00:19:42.751 response: 00:19:42.751 { 00:19:42.751 "code": -1, 00:19:42.751 "message": "Operation not permitted" 00:19:42.751 } 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.012 [2024-11-06 13:44:06.279602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:43.012 [2024-11-06 13:44:06.279629] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.012 request: 00:19:43.012 { 00:19:43.012 "name": "TLSTEST", 00:19:43.012 "trtype": "tcp", 00:19:43.012 "traddr": "10.0.0.2", 00:19:43.012 "adrfam": "ipv4", 00:19:43.012 "trsvcid": "4420", 00:19:43.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.012 "prchk_reftag": false, 00:19:43.012 "prchk_guard": false, 00:19:43.012 "hdgst": false, 00:19:43.012 "ddgst": false, 00:19:43.012 "psk": "key0", 00:19:43.012 "allow_unrecognized_csi": false, 00:19:43.012 "method": "bdev_nvme_attach_controller", 00:19:43.012 "req_id": 1 00:19:43.012 } 00:19:43.012 Got JSON-RPC error response 00:19:43.012 response: 00:19:43.012 { 00:19:43.012 "code": -126, 00:19:43.012 "message": "Required key not available" 00:19:43.012 } 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 658429 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 658429 ']' 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 658429 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658429 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658429' 00:19:43.012 killing process with pid 658429 00:19:43.012 
13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 658429 00:19:43.012 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.012 00:19:43.012 Latency(us) 00:19:43.012 [2024-11-06T12:44:06.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.012 [2024-11-06T12:44:06.388Z] =================================================================================================================== 00:19:43.012 [2024-11-06T12:44:06.388Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.012 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 658429 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 652929 ']' 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 652929' 00:19:43.274 killing process with pid 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 652929 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:43.274 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.X9PJpj8rCT 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:43.535 13:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.X9PJpj8rCT 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=658565 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 658565 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 658565 ']' 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.535 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.535 [2024-11-06 13:44:06.758559] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:43.535 [2024-11-06 13:44:06.758620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.535 [2024-11-06 13:44:06.848719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.535 [2024-11-06 13:44:06.878159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.535 [2024-11-06 13:44:06.878186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.535 [2024-11-06 13:44:06.878191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.535 [2024-11-06 13:44:06.878196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.535 [2024-11-06 13:44:06.878200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.535 [2024-11-06 13:44:06.878665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X9PJpj8rCT 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.478 [2024-11-06 13:44:07.709635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.478 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.739 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.739 [2024-11-06 13:44:08.030415] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.739 [2024-11-06 13:44:08.030636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:44.739 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.000 malloc0 00:19:45.000 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.000 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:19:45.261 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X9PJpj8rCT 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X9PJpj8rCT 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=659040 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 659040 /var/tmp/bdevperf.sock 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 659040 ']' 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.522 [2024-11-06 13:44:08.689803] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:45.522 [2024-11-06 13:44:08.689845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659040 ] 00:19:45.522 [2024-11-06 13:44:08.739710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.522 [2024-11-06 13:44:08.769121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:45.522 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:19:45.782 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.043 [2024-11-06 13:44:09.158494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.043 TLSTESTn1 00:19:46.043 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:46.043 Running I/O for 10 seconds... 
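The 10-second verify run that follows reports throughput as both IOPS and MiB/s. With the fixed 4096-byte io_size from the bdevperf invocation (`-o 4096`), the two figures differ only by a constant factor; a quick cross-check using the "iops" value from the results JSON later in this log:

```shell
# Cross-check bdevperf's IOPS -> MiB/s conversion using the fixed 4 KiB
# io_size (-o 4096) and the "iops" value from the results JSON in this log.
awk 'BEGIN {
    iops = 5748.509617001941          # "iops" field from the results JSON
    io_size = 4096                    # bytes per IO, from -o 4096
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# -> 22.46 MiB/s, matching the reported "mibps": 22.455...
```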
00:19:48.366 5576.00 IOPS, 21.78 MiB/s [2024-11-06T12:44:12.682Z] 5909.00 IOPS, 23.08 MiB/s [2024-11-06T12:44:13.623Z] 5914.33 IOPS, 23.10 MiB/s [2024-11-06T12:44:14.564Z] 5903.50 IOPS, 23.06 MiB/s [2024-11-06T12:44:15.506Z] 5873.20 IOPS, 22.94 MiB/s [2024-11-06T12:44:16.448Z] 5981.00 IOPS, 23.36 MiB/s [2024-11-06T12:44:17.395Z] 5933.86 IOPS, 23.18 MiB/s [2024-11-06T12:44:18.779Z] 5764.75 IOPS, 22.52 MiB/s [2024-11-06T12:44:19.721Z] 5686.11 IOPS, 22.21 MiB/s [2024-11-06T12:44:19.721Z] 5745.90 IOPS, 22.44 MiB/s 00:19:56.345 Latency(us) 00:19:56.345 [2024-11-06T12:44:19.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.345 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.345 Verification LBA range: start 0x0 length 0x2000 00:19:56.345 TLSTESTn1 : 10.02 5748.51 22.46 0.00 0.00 22231.42 4642.13 23156.05 00:19:56.345 [2024-11-06T12:44:19.721Z] =================================================================================================================== 00:19:56.345 [2024-11-06T12:44:19.721Z] Total : 5748.51 22.46 0.00 0.00 22231.42 4642.13 23156.05 00:19:56.345 { 00:19:56.345 "results": [ 00:19:56.345 { 00:19:56.345 "job": "TLSTESTn1", 00:19:56.345 "core_mask": "0x4", 00:19:56.345 "workload": "verify", 00:19:56.345 "status": "finished", 00:19:56.345 "verify_range": { 00:19:56.345 "start": 0, 00:19:56.345 "length": 8192 00:19:56.345 }, 00:19:56.345 "queue_depth": 128, 00:19:56.345 "io_size": 4096, 00:19:56.345 "runtime": 10.017727, 00:19:56.345 "iops": 5748.509617001941, 00:19:56.345 "mibps": 22.45511569141383, 00:19:56.345 "io_failed": 0, 00:19:56.345 "io_timeout": 0, 00:19:56.345 "avg_latency_us": 22231.415610699176, 00:19:56.345 "min_latency_us": 4642.133333333333, 00:19:56.345 "max_latency_us": 23156.053333333333 00:19:56.345 } 00:19:56.345 ], 00:19:56.345 "core_count": 1 00:19:56.345 } 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 659040 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 659040 ']' 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 659040 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 659040 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 659040' 00:19:56.345 killing process with pid 659040 00:19:56.345 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 659040 00:19:56.346 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.346 00:19:56.346 Latency(us) 00:19:56.346 [2024-11-06T12:44:19.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.346 [2024-11-06T12:44:19.722Z] =================================================================================================================== 00:19:56.346 [2024-11-06T12:44:19.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 659040 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.X9PJpj8rCT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X9PJpj8rCT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X9PJpj8rCT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X9PJpj8rCT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X9PJpj8rCT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=661154 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 661154 /var/tmp/bdevperf.sock 
00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 661154 ']' 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.346 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.346 [2024-11-06 13:44:19.631279] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:19:56.346 [2024-11-06 13:44:19.631335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661154 ] 00:19:56.346 [2024-11-06 13:44:19.690299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.346 [2024-11-06 13:44:19.718160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.606 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:56.606 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:56.606 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:19:56.606 [2024-11-06 13:44:19.954716] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.X9PJpj8rCT': 0100666 00:19:56.606 [2024-11-06 13:44:19.954743] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:56.606 request: 00:19:56.606 { 00:19:56.606 "name": "key0", 00:19:56.606 "path": "/tmp/tmp.X9PJpj8rCT", 00:19:56.606 "method": "keyring_file_add_key", 00:19:56.606 "req_id": 1 00:19:56.606 } 00:19:56.606 Got JSON-RPC error response 00:19:56.606 response: 00:19:56.606 { 00:19:56.606 "code": -1, 00:19:56.606 "message": "Operation not permitted" 00:19:56.606 } 00:19:56.866 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.866 [2024-11-06 13:44:20.147279] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.866 [2024-11-06 13:44:20.147310] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:56.866 request: 00:19:56.866 { 00:19:56.866 "name": "TLSTEST", 00:19:56.866 "trtype": "tcp", 00:19:56.866 "traddr": "10.0.0.2", 00:19:56.866 "adrfam": "ipv4", 00:19:56.866 "trsvcid": "4420", 00:19:56.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.866 "prchk_reftag": false, 00:19:56.866 "prchk_guard": false, 00:19:56.866 "hdgst": false, 00:19:56.866 "ddgst": false, 00:19:56.866 "psk": "key0", 00:19:56.866 "allow_unrecognized_csi": false, 00:19:56.866 "method": "bdev_nvme_attach_controller", 00:19:56.866 "req_id": 1 00:19:56.866 } 00:19:56.866 Got JSON-RPC error response 00:19:56.866 response: 00:19:56.866 { 00:19:56.866 "code": -126, 00:19:56.866 "message": "Required key not available" 00:19:56.866 } 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 661154 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 661154 ']' 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 661154 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.866 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 661154 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 661154' 00:19:57.126 killing process with pid 661154 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 661154 00:19:57.126 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.126 00:19:57.126 Latency(us) 00:19:57.126 [2024-11-06T12:44:20.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.126 [2024-11-06T12:44:20.502Z] =================================================================================================================== 00:19:57.126 [2024-11-06T12:44:20.502Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 661154 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 658565 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 658565 ']' 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 658565 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658565 00:19:57.126 13:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658565' 00:19:57.126 killing process with pid 658565 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 658565 00:19:57.126 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 658565 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=661254 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 661254 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 661254 ']' 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.386 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.386 [2024-11-06 13:44:20.571584] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:19:57.386 [2024-11-06 13:44:20.571643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.386 [2024-11-06 13:44:20.661222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.386 [2024-11-06 13:44:20.691759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.386 [2024-11-06 13:44:20.691784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.386 [2024-11-06 13:44:20.691790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.386 [2024-11-06 13:44:20.691795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.386 [2024-11-06 13:44:20.691799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
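The keyring_file_add_key failures in this log come from SPDK's check that a PSK file is not group- or world-accessible; the earlier `chmod 0666 /tmp/tmp.X9PJpj8rCT` deliberately violates that, and the NOT-wrapped setup that follows trips over the same file. A minimal sketch of the two permission states involved (hypothetical path and placeholder key contents, since the real PSK lives in a mktemp-named file):

```shell
# Hypothetical PSK file standing in for the mktemp-named /tmp/tmp.X9PJpj8rCT;
# the contents are a placeholder, not a valid TLS PSK.
KEY=$(mktemp)
echo "NVMeTLSkey-1:00:placeholder" > "$KEY"

chmod 0666 "$KEY"     # group/world access: keyring_file_add_key rejects this
                      # ("Invalid permissions for key file ...: 0100666")
stat -c '%a' "$KEY"   # prints 666

chmod 0600 "$KEY"     # owner-only, as target/tls.sh@182 does before the
                      # successful retry later in the log
stat -c '%a' "$KEY"   # prints 600
```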
00:19:57.386 [2024-11-06 13:44:20.692280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.957 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.957 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:57.957 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.957 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.957 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X9PJpj8rCT 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:58.218 [2024-11-06 13:44:21.528363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.218 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:58.480 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:58.480 [2024-11-06 13:44:21.849153] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.480 [2024-11-06 13:44:21.849347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.740 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:58.740 malloc0 00:19:58.740 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:59.000 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:19:59.000 [2024-11-06 13:44:22.323926] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.X9PJpj8rCT': 0100666 00:19:59.000 [2024-11-06 13:44:22.323945] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:59.000 request: 00:19:59.000 { 00:19:59.000 "name": "key0", 00:19:59.000 "path": "/tmp/tmp.X9PJpj8rCT", 00:19:59.000 "method": "keyring_file_add_key", 00:19:59.000 "req_id": 1 
00:19:59.000 } 00:19:59.000 Got JSON-RPC error response 00:19:59.000 response: 00:19:59.000 { 00:19:59.000 "code": -1, 00:19:59.000 "message": "Operation not permitted" 00:19:59.000 } 00:19:59.000 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.261 [2024-11-06 13:44:22.488352] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:59.261 [2024-11-06 13:44:22.488377] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:59.261 request: 00:19:59.261 { 00:19:59.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.261 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.261 "psk": "key0", 00:19:59.261 "method": "nvmf_subsystem_add_host", 00:19:59.261 "req_id": 1 00:19:59.261 } 00:19:59.261 Got JSON-RPC error response 00:19:59.261 response: 00:19:59.261 { 00:19:59.261 "code": -32603, 00:19:59.261 "message": "Internal error" 00:19:59.261 } 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 661254 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 661254 ']' 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 661254 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:59.262 13:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 661254 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 661254' 00:19:59.262 killing process with pid 661254 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 661254 00:19:59.262 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 661254 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.X9PJpj8rCT 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=661866 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 661866 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 661866 ']' 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.523 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.523 [2024-11-06 13:44:22.746100] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:19:59.523 [2024-11-06 13:44:22.746153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.523 [2024-11-06 13:44:22.837825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.523 [2024-11-06 13:44:22.867158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.523 [2024-11-06 13:44:22.867188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.523 [2024-11-06 13:44:22.867194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.523 [2024-11-06 13:44:22.867199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.523 [2024-11-06 13:44:22.867203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
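The `NOT ...` / `es=1` / `(( !es == 0 ))` bookkeeping threaded through the negative tests above inverts an expected failure so the suite passes only when the command fails. A simplified stand-in for the helper in common/autotest_common.sh (the real implementation also handles valid_exec_arg and exit-code classes; this only shows the inversion):

```shell
# Simplified stand-in for the NOT helper used by common/autotest_common.sh
# in this log: run a command that is *expected* to fail, and invert its
# exit status so the surrounding test succeeds only on failure.
NOT() {
    if "$@"; then
        return 1          # command unexpectedly succeeded
    else
        return 0          # expected failure observed
    fi
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success detected"
```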
00:19:59.523 [2024-11-06 13:44:22.867685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X9PJpj8rCT 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.466 [2024-11-06 13:44:23.719152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.466 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.727 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.727 [2024-11-06 13:44:24.043949] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.727 [2024-11-06 13:44:24.044152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:00.727 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.024 malloc0 00:20:01.024 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.024 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:20:01.285 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=662236 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 662236 /var/tmp/bdevperf.sock 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 662236 ']' 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:01.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:01.545 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.546 [2024-11-06 13:44:24.759232] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:01.546 [2024-11-06 13:44:24.759286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662236 ] 00:20:01.546 [2024-11-06 13:44:24.817610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.546 [2024-11-06 13:44:24.846354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.807 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.807 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:01.807 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:20:01.807 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.068 [2024-11-06 13:44:25.259756] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.068 TLSTESTn1 00:20:02.068 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:02.329 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:02.329 "subsystems": [ 00:20:02.329 { 00:20:02.329 "subsystem": "keyring", 00:20:02.329 "config": [ 00:20:02.329 { 00:20:02.329 "method": "keyring_file_add_key", 00:20:02.329 "params": { 00:20:02.329 "name": "key0", 00:20:02.329 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:02.329 } 00:20:02.329 } 00:20:02.329 ] 00:20:02.329 }, 00:20:02.329 { 00:20:02.329 "subsystem": "iobuf", 00:20:02.329 "config": [ 00:20:02.329 { 00:20:02.329 "method": "iobuf_set_options", 00:20:02.329 "params": { 00:20:02.329 "small_pool_count": 8192, 00:20:02.329 "large_pool_count": 1024, 00:20:02.329 "small_bufsize": 8192, 00:20:02.329 "large_bufsize": 135168, 00:20:02.329 "enable_numa": false 00:20:02.329 } 00:20:02.329 } 00:20:02.329 ] 00:20:02.329 }, 00:20:02.329 { 00:20:02.329 "subsystem": "sock", 00:20:02.329 "config": [ 00:20:02.329 { 00:20:02.329 "method": "sock_set_default_impl", 00:20:02.329 "params": { 00:20:02.329 "impl_name": "posix" 00:20:02.329 } 00:20:02.329 }, 00:20:02.329 { 00:20:02.329 "method": "sock_impl_set_options", 00:20:02.329 "params": { 00:20:02.329 "impl_name": "ssl", 00:20:02.329 "recv_buf_size": 4096, 00:20:02.329 "send_buf_size": 4096, 00:20:02.329 "enable_recv_pipe": true, 00:20:02.329 "enable_quickack": false, 00:20:02.330 "enable_placement_id": 0, 00:20:02.330 "enable_zerocopy_send_server": true, 00:20:02.330 "enable_zerocopy_send_client": false, 00:20:02.330 "zerocopy_threshold": 0, 00:20:02.330 "tls_version": 0, 00:20:02.330 "enable_ktls": false 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "sock_impl_set_options", 00:20:02.330 "params": { 00:20:02.330 "impl_name": "posix", 00:20:02.330 "recv_buf_size": 2097152, 00:20:02.330 "send_buf_size": 2097152, 00:20:02.330 "enable_recv_pipe": true, 00:20:02.330 "enable_quickack": false, 00:20:02.330 "enable_placement_id": 0, 
00:20:02.330 "enable_zerocopy_send_server": true, 00:20:02.330 "enable_zerocopy_send_client": false, 00:20:02.330 "zerocopy_threshold": 0, 00:20:02.330 "tls_version": 0, 00:20:02.330 "enable_ktls": false 00:20:02.330 } 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "vmd", 00:20:02.330 "config": [] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "accel", 00:20:02.330 "config": [ 00:20:02.330 { 00:20:02.330 "method": "accel_set_options", 00:20:02.330 "params": { 00:20:02.330 "small_cache_size": 128, 00:20:02.330 "large_cache_size": 16, 00:20:02.330 "task_count": 2048, 00:20:02.330 "sequence_count": 2048, 00:20:02.330 "buf_count": 2048 00:20:02.330 } 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "bdev", 00:20:02.330 "config": [ 00:20:02.330 { 00:20:02.330 "method": "bdev_set_options", 00:20:02.330 "params": { 00:20:02.330 "bdev_io_pool_size": 65535, 00:20:02.330 "bdev_io_cache_size": 256, 00:20:02.330 "bdev_auto_examine": true, 00:20:02.330 "iobuf_small_cache_size": 128, 00:20:02.330 "iobuf_large_cache_size": 16 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_raid_set_options", 00:20:02.330 "params": { 00:20:02.330 "process_window_size_kb": 1024, 00:20:02.330 "process_max_bandwidth_mb_sec": 0 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_iscsi_set_options", 00:20:02.330 "params": { 00:20:02.330 "timeout_sec": 30 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_nvme_set_options", 00:20:02.330 "params": { 00:20:02.330 "action_on_timeout": "none", 00:20:02.330 "timeout_us": 0, 00:20:02.330 "timeout_admin_us": 0, 00:20:02.330 "keep_alive_timeout_ms": 10000, 00:20:02.330 "arbitration_burst": 0, 00:20:02.330 "low_priority_weight": 0, 00:20:02.330 "medium_priority_weight": 0, 00:20:02.330 "high_priority_weight": 0, 00:20:02.330 "nvme_adminq_poll_period_us": 10000, 00:20:02.330 "nvme_ioq_poll_period_us": 0, 
00:20:02.330 "io_queue_requests": 0, 00:20:02.330 "delay_cmd_submit": true, 00:20:02.330 "transport_retry_count": 4, 00:20:02.330 "bdev_retry_count": 3, 00:20:02.330 "transport_ack_timeout": 0, 00:20:02.330 "ctrlr_loss_timeout_sec": 0, 00:20:02.330 "reconnect_delay_sec": 0, 00:20:02.330 "fast_io_fail_timeout_sec": 0, 00:20:02.330 "disable_auto_failback": false, 00:20:02.330 "generate_uuids": false, 00:20:02.330 "transport_tos": 0, 00:20:02.330 "nvme_error_stat": false, 00:20:02.330 "rdma_srq_size": 0, 00:20:02.330 "io_path_stat": false, 00:20:02.330 "allow_accel_sequence": false, 00:20:02.330 "rdma_max_cq_size": 0, 00:20:02.330 "rdma_cm_event_timeout_ms": 0, 00:20:02.330 "dhchap_digests": [ 00:20:02.330 "sha256", 00:20:02.330 "sha384", 00:20:02.330 "sha512" 00:20:02.330 ], 00:20:02.330 "dhchap_dhgroups": [ 00:20:02.330 "null", 00:20:02.330 "ffdhe2048", 00:20:02.330 "ffdhe3072", 00:20:02.330 "ffdhe4096", 00:20:02.330 "ffdhe6144", 00:20:02.330 "ffdhe8192" 00:20:02.330 ] 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_nvme_set_hotplug", 00:20:02.330 "params": { 00:20:02.330 "period_us": 100000, 00:20:02.330 "enable": false 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_malloc_create", 00:20:02.330 "params": { 00:20:02.330 "name": "malloc0", 00:20:02.330 "num_blocks": 8192, 00:20:02.330 "block_size": 4096, 00:20:02.330 "physical_block_size": 4096, 00:20:02.330 "uuid": "e5e408a6-1d0f-4174-bdf8-39f1f8393f74", 00:20:02.330 "optimal_io_boundary": 0, 00:20:02.330 "md_size": 0, 00:20:02.330 "dif_type": 0, 00:20:02.330 "dif_is_head_of_md": false, 00:20:02.330 "dif_pi_format": 0 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "bdev_wait_for_examine" 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "nbd", 00:20:02.330 "config": [] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "scheduler", 00:20:02.330 "config": [ 00:20:02.330 { 00:20:02.330 "method": 
"framework_set_scheduler", 00:20:02.330 "params": { 00:20:02.330 "name": "static" 00:20:02.330 } 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "subsystem": "nvmf", 00:20:02.330 "config": [ 00:20:02.330 { 00:20:02.330 "method": "nvmf_set_config", 00:20:02.330 "params": { 00:20:02.330 "discovery_filter": "match_any", 00:20:02.330 "admin_cmd_passthru": { 00:20:02.330 "identify_ctrlr": false 00:20:02.330 }, 00:20:02.330 "dhchap_digests": [ 00:20:02.330 "sha256", 00:20:02.330 "sha384", 00:20:02.330 "sha512" 00:20:02.330 ], 00:20:02.330 "dhchap_dhgroups": [ 00:20:02.330 "null", 00:20:02.330 "ffdhe2048", 00:20:02.330 "ffdhe3072", 00:20:02.330 "ffdhe4096", 00:20:02.330 "ffdhe6144", 00:20:02.330 "ffdhe8192" 00:20:02.330 ] 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_set_max_subsystems", 00:20:02.330 "params": { 00:20:02.330 "max_subsystems": 1024 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_set_crdt", 00:20:02.330 "params": { 00:20:02.330 "crdt1": 0, 00:20:02.330 "crdt2": 0, 00:20:02.330 "crdt3": 0 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_create_transport", 00:20:02.330 "params": { 00:20:02.330 "trtype": "TCP", 00:20:02.330 "max_queue_depth": 128, 00:20:02.330 "max_io_qpairs_per_ctrlr": 127, 00:20:02.330 "in_capsule_data_size": 4096, 00:20:02.330 "max_io_size": 131072, 00:20:02.330 "io_unit_size": 131072, 00:20:02.330 "max_aq_depth": 128, 00:20:02.330 "num_shared_buffers": 511, 00:20:02.330 "buf_cache_size": 4294967295, 00:20:02.330 "dif_insert_or_strip": false, 00:20:02.330 "zcopy": false, 00:20:02.330 "c2h_success": false, 00:20:02.330 "sock_priority": 0, 00:20:02.330 "abort_timeout_sec": 1, 00:20:02.330 "ack_timeout": 0, 00:20:02.330 "data_wr_pool_size": 0 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_create_subsystem", 00:20:02.330 "params": { 00:20:02.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.330 
"allow_any_host": false, 00:20:02.330 "serial_number": "SPDK00000000000001", 00:20:02.330 "model_number": "SPDK bdev Controller", 00:20:02.330 "max_namespaces": 10, 00:20:02.330 "min_cntlid": 1, 00:20:02.330 "max_cntlid": 65519, 00:20:02.330 "ana_reporting": false 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_subsystem_add_host", 00:20:02.330 "params": { 00:20:02.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.330 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.330 "psk": "key0" 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_subsystem_add_ns", 00:20:02.330 "params": { 00:20:02.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.330 "namespace": { 00:20:02.330 "nsid": 1, 00:20:02.330 "bdev_name": "malloc0", 00:20:02.330 "nguid": "E5E408A61D0F4174BDF839F1F8393F74", 00:20:02.330 "uuid": "e5e408a6-1d0f-4174-bdf8-39f1f8393f74", 00:20:02.330 "no_auto_visible": false 00:20:02.330 } 00:20:02.330 } 00:20:02.330 }, 00:20:02.330 { 00:20:02.330 "method": "nvmf_subsystem_add_listener", 00:20:02.330 "params": { 00:20:02.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.330 "listen_address": { 00:20:02.330 "trtype": "TCP", 00:20:02.330 "adrfam": "IPv4", 00:20:02.330 "traddr": "10.0.0.2", 00:20:02.330 "trsvcid": "4420" 00:20:02.330 }, 00:20:02.330 "secure_channel": true 00:20:02.330 } 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 } 00:20:02.330 ] 00:20:02.330 }' 00:20:02.330 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:02.592 "subsystems": [ 00:20:02.592 { 00:20:02.592 "subsystem": "keyring", 00:20:02.592 "config": [ 00:20:02.592 { 00:20:02.592 "method": "keyring_file_add_key", 00:20:02.592 "params": { 00:20:02.592 "name": "key0", 00:20:02.592 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:02.592 } 
00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "iobuf", 00:20:02.592 "config": [ 00:20:02.592 { 00:20:02.592 "method": "iobuf_set_options", 00:20:02.592 "params": { 00:20:02.592 "small_pool_count": 8192, 00:20:02.592 "large_pool_count": 1024, 00:20:02.592 "small_bufsize": 8192, 00:20:02.592 "large_bufsize": 135168, 00:20:02.592 "enable_numa": false 00:20:02.592 } 00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "sock", 00:20:02.592 "config": [ 00:20:02.592 { 00:20:02.592 "method": "sock_set_default_impl", 00:20:02.592 "params": { 00:20:02.592 "impl_name": "posix" 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "sock_impl_set_options", 00:20:02.592 "params": { 00:20:02.592 "impl_name": "ssl", 00:20:02.592 "recv_buf_size": 4096, 00:20:02.592 "send_buf_size": 4096, 00:20:02.592 "enable_recv_pipe": true, 00:20:02.592 "enable_quickack": false, 00:20:02.592 "enable_placement_id": 0, 00:20:02.592 "enable_zerocopy_send_server": true, 00:20:02.592 "enable_zerocopy_send_client": false, 00:20:02.592 "zerocopy_threshold": 0, 00:20:02.592 "tls_version": 0, 00:20:02.592 "enable_ktls": false 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "sock_impl_set_options", 00:20:02.592 "params": { 00:20:02.592 "impl_name": "posix", 00:20:02.592 "recv_buf_size": 2097152, 00:20:02.592 "send_buf_size": 2097152, 00:20:02.592 "enable_recv_pipe": true, 00:20:02.592 "enable_quickack": false, 00:20:02.592 "enable_placement_id": 0, 00:20:02.592 "enable_zerocopy_send_server": true, 00:20:02.592 "enable_zerocopy_send_client": false, 00:20:02.592 "zerocopy_threshold": 0, 00:20:02.592 "tls_version": 0, 00:20:02.592 "enable_ktls": false 00:20:02.592 } 00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "vmd", 00:20:02.592 "config": [] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "accel", 00:20:02.592 "config": [ 00:20:02.592 { 00:20:02.592 
"method": "accel_set_options", 00:20:02.592 "params": { 00:20:02.592 "small_cache_size": 128, 00:20:02.592 "large_cache_size": 16, 00:20:02.592 "task_count": 2048, 00:20:02.592 "sequence_count": 2048, 00:20:02.592 "buf_count": 2048 00:20:02.592 } 00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "bdev", 00:20:02.592 "config": [ 00:20:02.592 { 00:20:02.592 "method": "bdev_set_options", 00:20:02.592 "params": { 00:20:02.592 "bdev_io_pool_size": 65535, 00:20:02.592 "bdev_io_cache_size": 256, 00:20:02.592 "bdev_auto_examine": true, 00:20:02.592 "iobuf_small_cache_size": 128, 00:20:02.592 "iobuf_large_cache_size": 16 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_raid_set_options", 00:20:02.592 "params": { 00:20:02.592 "process_window_size_kb": 1024, 00:20:02.592 "process_max_bandwidth_mb_sec": 0 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_iscsi_set_options", 00:20:02.592 "params": { 00:20:02.592 "timeout_sec": 30 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_nvme_set_options", 00:20:02.592 "params": { 00:20:02.592 "action_on_timeout": "none", 00:20:02.592 "timeout_us": 0, 00:20:02.592 "timeout_admin_us": 0, 00:20:02.592 "keep_alive_timeout_ms": 10000, 00:20:02.592 "arbitration_burst": 0, 00:20:02.592 "low_priority_weight": 0, 00:20:02.592 "medium_priority_weight": 0, 00:20:02.592 "high_priority_weight": 0, 00:20:02.592 "nvme_adminq_poll_period_us": 10000, 00:20:02.592 "nvme_ioq_poll_period_us": 0, 00:20:02.592 "io_queue_requests": 512, 00:20:02.592 "delay_cmd_submit": true, 00:20:02.592 "transport_retry_count": 4, 00:20:02.592 "bdev_retry_count": 3, 00:20:02.592 "transport_ack_timeout": 0, 00:20:02.592 "ctrlr_loss_timeout_sec": 0, 00:20:02.592 "reconnect_delay_sec": 0, 00:20:02.592 "fast_io_fail_timeout_sec": 0, 00:20:02.592 "disable_auto_failback": false, 00:20:02.592 "generate_uuids": false, 00:20:02.592 "transport_tos": 0, 00:20:02.592 
"nvme_error_stat": false, 00:20:02.592 "rdma_srq_size": 0, 00:20:02.592 "io_path_stat": false, 00:20:02.592 "allow_accel_sequence": false, 00:20:02.592 "rdma_max_cq_size": 0, 00:20:02.592 "rdma_cm_event_timeout_ms": 0, 00:20:02.592 "dhchap_digests": [ 00:20:02.592 "sha256", 00:20:02.592 "sha384", 00:20:02.592 "sha512" 00:20:02.592 ], 00:20:02.592 "dhchap_dhgroups": [ 00:20:02.592 "null", 00:20:02.592 "ffdhe2048", 00:20:02.592 "ffdhe3072", 00:20:02.592 "ffdhe4096", 00:20:02.592 "ffdhe6144", 00:20:02.592 "ffdhe8192" 00:20:02.592 ] 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_nvme_attach_controller", 00:20:02.592 "params": { 00:20:02.592 "name": "TLSTEST", 00:20:02.592 "trtype": "TCP", 00:20:02.592 "adrfam": "IPv4", 00:20:02.592 "traddr": "10.0.0.2", 00:20:02.592 "trsvcid": "4420", 00:20:02.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.592 "prchk_reftag": false, 00:20:02.592 "prchk_guard": false, 00:20:02.592 "ctrlr_loss_timeout_sec": 0, 00:20:02.592 "reconnect_delay_sec": 0, 00:20:02.592 "fast_io_fail_timeout_sec": 0, 00:20:02.592 "psk": "key0", 00:20:02.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.592 "hdgst": false, 00:20:02.592 "ddgst": false, 00:20:02.592 "multipath": "multipath" 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_nvme_set_hotplug", 00:20:02.592 "params": { 00:20:02.592 "period_us": 100000, 00:20:02.592 "enable": false 00:20:02.592 } 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "method": "bdev_wait_for_examine" 00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }, 00:20:02.592 { 00:20:02.592 "subsystem": "nbd", 00:20:02.592 "config": [] 00:20:02.592 } 00:20:02.592 ] 00:20:02.592 }' 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 662236 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 662236 ']' 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 662236 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 662236 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 662236' 00:20:02.592 killing process with pid 662236 00:20:02.592 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 662236 00:20:02.592 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.592 00:20:02.593 Latency(us) 00:20:02.593 [2024-11-06T12:44:25.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.593 [2024-11-06T12:44:25.969Z] =================================================================================================================== 00:20:02.593 [2024-11-06T12:44:25.969Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.593 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 662236 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 661866 ']' 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 661866' 00:20:02.854 killing process with pid 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 661866 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.854 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:02.854 "subsystems": [ 00:20:02.854 { 00:20:02.854 "subsystem": "keyring", 00:20:02.854 "config": [ 00:20:02.854 { 00:20:02.854 "method": "keyring_file_add_key", 00:20:02.854 "params": { 00:20:02.854 "name": "key0", 00:20:02.854 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:02.854 } 00:20:02.854 } 00:20:02.854 ] 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "subsystem": "iobuf", 00:20:02.854 "config": [ 00:20:02.854 { 00:20:02.854 "method": "iobuf_set_options", 00:20:02.854 "params": { 00:20:02.854 "small_pool_count": 8192, 00:20:02.854 "large_pool_count": 1024, 00:20:02.854 "small_bufsize": 8192, 00:20:02.854 "large_bufsize": 135168, 00:20:02.854 "enable_numa": false 00:20:02.854 } 00:20:02.854 } 00:20:02.854 ] 00:20:02.854 }, 00:20:02.854 
{ 00:20:02.854 "subsystem": "sock", 00:20:02.854 "config": [ 00:20:02.854 { 00:20:02.854 "method": "sock_set_default_impl", 00:20:02.854 "params": { 00:20:02.854 "impl_name": "posix" 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "sock_impl_set_options", 00:20:02.854 "params": { 00:20:02.854 "impl_name": "ssl", 00:20:02.854 "recv_buf_size": 4096, 00:20:02.854 "send_buf_size": 4096, 00:20:02.854 "enable_recv_pipe": true, 00:20:02.854 "enable_quickack": false, 00:20:02.854 "enable_placement_id": 0, 00:20:02.854 "enable_zerocopy_send_server": true, 00:20:02.854 "enable_zerocopy_send_client": false, 00:20:02.854 "zerocopy_threshold": 0, 00:20:02.854 "tls_version": 0, 00:20:02.854 "enable_ktls": false 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "sock_impl_set_options", 00:20:02.854 "params": { 00:20:02.854 "impl_name": "posix", 00:20:02.854 "recv_buf_size": 2097152, 00:20:02.854 "send_buf_size": 2097152, 00:20:02.854 "enable_recv_pipe": true, 00:20:02.854 "enable_quickack": false, 00:20:02.854 "enable_placement_id": 0, 00:20:02.854 "enable_zerocopy_send_server": true, 00:20:02.854 "enable_zerocopy_send_client": false, 00:20:02.854 "zerocopy_threshold": 0, 00:20:02.854 "tls_version": 0, 00:20:02.854 "enable_ktls": false 00:20:02.854 } 00:20:02.854 } 00:20:02.854 ] 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "subsystem": "vmd", 00:20:02.854 "config": [] 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "subsystem": "accel", 00:20:02.854 "config": [ 00:20:02.854 { 00:20:02.854 "method": "accel_set_options", 00:20:02.854 "params": { 00:20:02.854 "small_cache_size": 128, 00:20:02.854 "large_cache_size": 16, 00:20:02.854 "task_count": 2048, 00:20:02.854 "sequence_count": 2048, 00:20:02.854 "buf_count": 2048 00:20:02.854 } 00:20:02.854 } 00:20:02.854 ] 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "subsystem": "bdev", 00:20:02.854 "config": [ 00:20:02.854 { 00:20:02.854 "method": "bdev_set_options", 00:20:02.854 "params": { 00:20:02.854 
"bdev_io_pool_size": 65535, 00:20:02.854 "bdev_io_cache_size": 256, 00:20:02.854 "bdev_auto_examine": true, 00:20:02.854 "iobuf_small_cache_size": 128, 00:20:02.854 "iobuf_large_cache_size": 16 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "bdev_raid_set_options", 00:20:02.854 "params": { 00:20:02.854 "process_window_size_kb": 1024, 00:20:02.854 "process_max_bandwidth_mb_sec": 0 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "bdev_iscsi_set_options", 00:20:02.854 "params": { 00:20:02.854 "timeout_sec": 30 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "bdev_nvme_set_options", 00:20:02.854 "params": { 00:20:02.854 "action_on_timeout": "none", 00:20:02.854 "timeout_us": 0, 00:20:02.854 "timeout_admin_us": 0, 00:20:02.854 "keep_alive_timeout_ms": 10000, 00:20:02.854 "arbitration_burst": 0, 00:20:02.854 "low_priority_weight": 0, 00:20:02.854 "medium_priority_weight": 0, 00:20:02.854 "high_priority_weight": 0, 00:20:02.854 "nvme_adminq_poll_period_us": 10000, 00:20:02.854 "nvme_ioq_poll_period_us": 0, 00:20:02.854 "io_queue_requests": 0, 00:20:02.854 "delay_cmd_submit": true, 00:20:02.854 "transport_retry_count": 4, 00:20:02.854 "bdev_retry_count": 3, 00:20:02.854 "transport_ack_timeout": 0, 00:20:02.854 "ctrlr_loss_timeout_sec": 0, 00:20:02.854 "reconnect_delay_sec": 0, 00:20:02.854 "fast_io_fail_timeout_sec": 0, 00:20:02.854 "disable_auto_failback": false, 00:20:02.854 "generate_uuids": false, 00:20:02.854 "transport_tos": 0, 00:20:02.854 "nvme_error_stat": false, 00:20:02.854 "rdma_srq_size": 0, 00:20:02.854 "io_path_stat": false, 00:20:02.854 "allow_accel_sequence": false, 00:20:02.854 "rdma_max_cq_size": 0, 00:20:02.854 "rdma_cm_event_timeout_ms": 0, 00:20:02.854 "dhchap_digests": [ 00:20:02.854 "sha256", 00:20:02.854 "sha384", 00:20:02.854 "sha512" 00:20:02.854 ], 00:20:02.854 "dhchap_dhgroups": [ 00:20:02.854 "null", 00:20:02.854 "ffdhe2048", 00:20:02.854 "ffdhe3072", 00:20:02.854 "ffdhe4096", 
00:20:02.854 "ffdhe6144", 00:20:02.854 "ffdhe8192" 00:20:02.854 ] 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "bdev_nvme_set_hotplug", 00:20:02.854 "params": { 00:20:02.854 "period_us": 100000, 00:20:02.854 "enable": false 00:20:02.854 } 00:20:02.854 }, 00:20:02.854 { 00:20:02.854 "method": "bdev_malloc_create", 00:20:02.855 "params": { 00:20:02.855 "name": "malloc0", 00:20:02.855 "num_blocks": 8192, 00:20:02.855 "block_size": 4096, 00:20:02.855 "physical_block_size": 4096, 00:20:02.855 "uuid": "e5e408a6-1d0f-4174-bdf8-39f1f8393f74", 00:20:02.855 "optimal_io_boundary": 0, 00:20:02.855 "md_size": 0, 00:20:02.855 "dif_type": 0, 00:20:02.855 "dif_is_head_of_md": false, 00:20:02.855 "dif_pi_format": 0 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "bdev_wait_for_examine" 00:20:02.855 } 00:20:02.855 ] 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "subsystem": "nbd", 00:20:02.855 "config": [] 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "subsystem": "scheduler", 00:20:02.855 "config": [ 00:20:02.855 { 00:20:02.855 "method": "framework_set_scheduler", 00:20:02.855 "params": { 00:20:02.855 "name": "static" 00:20:02.855 } 00:20:02.855 } 00:20:02.855 ] 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "subsystem": "nvmf", 00:20:02.855 "config": [ 00:20:02.855 { 00:20:02.855 "method": "nvmf_set_config", 00:20:02.855 "params": { 00:20:02.855 "discovery_filter": "match_any", 00:20:02.855 "admin_cmd_passthru": { 00:20:02.855 "identify_ctrlr": false 00:20:02.855 }, 00:20:02.855 "dhchap_digests": [ 00:20:02.855 "sha256", 00:20:02.855 "sha384", 00:20:02.855 "sha512" 00:20:02.855 ], 00:20:02.855 "dhchap_dhgroups": [ 00:20:02.855 "null", 00:20:02.855 "ffdhe2048", 00:20:02.855 "ffdhe3072", 00:20:02.855 "ffdhe4096", 00:20:02.855 "ffdhe6144", 00:20:02.855 "ffdhe8192" 00:20:02.855 ] 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_set_max_subsystems", 00:20:02.855 "params": { 00:20:02.855 "max_subsystems": 1024 00:20:02.855 
} 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_set_crdt", 00:20:02.855 "params": { 00:20:02.855 "crdt1": 0, 00:20:02.855 "crdt2": 0, 00:20:02.855 "crdt3": 0 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_create_transport", 00:20:02.855 "params": { 00:20:02.855 "trtype": "TCP", 00:20:02.855 "max_queue_depth": 128, 00:20:02.855 "max_io_qpairs_per_ctrlr": 127, 00:20:02.855 "in_capsule_data_size": 4096, 00:20:02.855 "max_io_size": 131072, 00:20:02.855 "io_unit_size": 131072, 00:20:02.855 "max_aq_depth": 128, 00:20:02.855 "num_shared_buffers": 511, 00:20:02.855 "buf_cache_size": 4294967295, 00:20:02.855 "dif_insert_or_strip": false, 00:20:02.855 "zcopy": false, 00:20:02.855 "c2h_success": false, 00:20:02.855 "sock_priority": 0, 00:20:02.855 "abort_timeout_sec": 1, 00:20:02.855 "ack_timeout": 0, 00:20:02.855 "data_wr_pool_size": 0 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_create_subsystem", 00:20:02.855 "params": { 00:20:02.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.855 "allow_any_host": false, 00:20:02.855 "serial_number": "SPDK00000000000001", 00:20:02.855 "model_number": "SPDK bdev Controller", 00:20:02.855 "max_namespaces": 10, 00:20:02.855 "min_cntlid": 1, 00:20:02.855 "max_cntlid": 65519, 00:20:02.855 "ana_reporting": false 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_subsystem_add_host", 00:20:02.855 "params": { 00:20:02.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.855 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.855 "psk": "key0" 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_subsystem_add_ns", 00:20:02.855 "params": { 00:20:02.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.855 "namespace": { 00:20:02.855 "nsid": 1, 00:20:02.855 "bdev_name": "malloc0", 00:20:02.855 "nguid": "E5E408A61D0F4174BDF839F1F8393F74", 00:20:02.855 "uuid": "e5e408a6-1d0f-4174-bdf8-39f1f8393f74", 00:20:02.855 "no_auto_visible": false 
00:20:02.855 } 00:20:02.855 } 00:20:02.855 }, 00:20:02.855 { 00:20:02.855 "method": "nvmf_subsystem_add_listener", 00:20:02.855 "params": { 00:20:02.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.855 "listen_address": { 00:20:02.855 "trtype": "TCP", 00:20:02.855 "adrfam": "IPv4", 00:20:02.855 "traddr": "10.0.0.2", 00:20:02.855 "trsvcid": "4420" 00:20:02.855 }, 00:20:02.855 "secure_channel": true 00:20:02.855 } 00:20:02.855 } 00:20:02.855 ] 00:20:02.855 } 00:20:02.855 ] 00:20:02.855 }' 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=662584 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 662584 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 662584 ']' 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:02.855 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.116 [2024-11-06 13:44:26.252927] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:03.116 [2024-11-06 13:44:26.252982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.116 [2024-11-06 13:44:26.320463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.116 [2024-11-06 13:44:26.348018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.116 [2024-11-06 13:44:26.348047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.116 [2024-11-06 13:44:26.348053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.116 [2024-11-06 13:44:26.348057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.116 [2024-11-06 13:44:26.348062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.116 [2024-11-06 13:44:26.348535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.377 [2024-11-06 13:44:26.540908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.377 [2024-11-06 13:44:26.572933] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.377 [2024-11-06 13:44:26.573149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.948 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:03.948 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:03.948 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=662616 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 662616 /var/tmp/bdevperf.sock 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 662616 ']' 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:03.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.949 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:03.949 "subsystems": [ 00:20:03.949 { 00:20:03.949 "subsystem": "keyring", 00:20:03.949 "config": [ 00:20:03.949 { 00:20:03.949 "method": "keyring_file_add_key", 00:20:03.949 "params": { 00:20:03.949 "name": "key0", 00:20:03.949 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:03.949 } 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "iobuf", 00:20:03.949 "config": [ 00:20:03.949 { 00:20:03.949 "method": "iobuf_set_options", 00:20:03.949 "params": { 00:20:03.949 "small_pool_count": 8192, 00:20:03.949 "large_pool_count": 1024, 00:20:03.949 "small_bufsize": 8192, 00:20:03.949 "large_bufsize": 135168, 00:20:03.949 "enable_numa": false 00:20:03.949 } 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "sock", 00:20:03.949 "config": [ 00:20:03.949 { 00:20:03.949 "method": "sock_set_default_impl", 00:20:03.949 "params": { 00:20:03.949 "impl_name": "posix" 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "sock_impl_set_options", 00:20:03.949 "params": { 00:20:03.949 "impl_name": "ssl", 00:20:03.949 "recv_buf_size": 4096, 00:20:03.949 "send_buf_size": 4096, 00:20:03.949 "enable_recv_pipe": true, 00:20:03.949 "enable_quickack": false, 00:20:03.949 "enable_placement_id": 0, 00:20:03.949 "enable_zerocopy_send_server": true, 00:20:03.949 
"enable_zerocopy_send_client": false, 00:20:03.949 "zerocopy_threshold": 0, 00:20:03.949 "tls_version": 0, 00:20:03.949 "enable_ktls": false 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "sock_impl_set_options", 00:20:03.949 "params": { 00:20:03.949 "impl_name": "posix", 00:20:03.949 "recv_buf_size": 2097152, 00:20:03.949 "send_buf_size": 2097152, 00:20:03.949 "enable_recv_pipe": true, 00:20:03.949 "enable_quickack": false, 00:20:03.949 "enable_placement_id": 0, 00:20:03.949 "enable_zerocopy_send_server": true, 00:20:03.949 "enable_zerocopy_send_client": false, 00:20:03.949 "zerocopy_threshold": 0, 00:20:03.949 "tls_version": 0, 00:20:03.949 "enable_ktls": false 00:20:03.949 } 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "vmd", 00:20:03.949 "config": [] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "accel", 00:20:03.949 "config": [ 00:20:03.949 { 00:20:03.949 "method": "accel_set_options", 00:20:03.949 "params": { 00:20:03.949 "small_cache_size": 128, 00:20:03.949 "large_cache_size": 16, 00:20:03.949 "task_count": 2048, 00:20:03.949 "sequence_count": 2048, 00:20:03.949 "buf_count": 2048 00:20:03.949 } 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "bdev", 00:20:03.949 "config": [ 00:20:03.949 { 00:20:03.949 "method": "bdev_set_options", 00:20:03.949 "params": { 00:20:03.949 "bdev_io_pool_size": 65535, 00:20:03.949 "bdev_io_cache_size": 256, 00:20:03.949 "bdev_auto_examine": true, 00:20:03.949 "iobuf_small_cache_size": 128, 00:20:03.949 "iobuf_large_cache_size": 16 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "bdev_raid_set_options", 00:20:03.949 "params": { 00:20:03.949 "process_window_size_kb": 1024, 00:20:03.949 "process_max_bandwidth_mb_sec": 0 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "bdev_iscsi_set_options", 00:20:03.949 "params": { 00:20:03.949 "timeout_sec": 30 00:20:03.949 } 00:20:03.949 }, 
00:20:03.949 { 00:20:03.949 "method": "bdev_nvme_set_options", 00:20:03.949 "params": { 00:20:03.949 "action_on_timeout": "none", 00:20:03.949 "timeout_us": 0, 00:20:03.949 "timeout_admin_us": 0, 00:20:03.949 "keep_alive_timeout_ms": 10000, 00:20:03.949 "arbitration_burst": 0, 00:20:03.949 "low_priority_weight": 0, 00:20:03.949 "medium_priority_weight": 0, 00:20:03.949 "high_priority_weight": 0, 00:20:03.949 "nvme_adminq_poll_period_us": 10000, 00:20:03.949 "nvme_ioq_poll_period_us": 0, 00:20:03.949 "io_queue_requests": 512, 00:20:03.949 "delay_cmd_submit": true, 00:20:03.949 "transport_retry_count": 4, 00:20:03.949 "bdev_retry_count": 3, 00:20:03.949 "transport_ack_timeout": 0, 00:20:03.949 "ctrlr_loss_timeout_sec": 0, 00:20:03.949 "reconnect_delay_sec": 0, 00:20:03.949 "fast_io_fail_timeout_sec": 0, 00:20:03.949 "disable_auto_failback": false, 00:20:03.949 "generate_uuids": false, 00:20:03.949 "transport_tos": 0, 00:20:03.949 "nvme_error_stat": false, 00:20:03.949 "rdma_srq_size": 0, 00:20:03.949 "io_path_stat": false, 00:20:03.949 "allow_accel_sequence": false, 00:20:03.949 "rdma_max_cq_size": 0, 00:20:03.949 "rdma_cm_event_timeout_ms": 0, 00:20:03.949 "dhchap_digests": [ 00:20:03.949 "sha256", 00:20:03.949 "sha384", 00:20:03.949 "sha512" 00:20:03.949 ], 00:20:03.949 "dhchap_dhgroups": [ 00:20:03.949 "null", 00:20:03.949 "ffdhe2048", 00:20:03.949 "ffdhe3072", 00:20:03.949 "ffdhe4096", 00:20:03.949 "ffdhe6144", 00:20:03.949 "ffdhe8192" 00:20:03.949 ] 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "bdev_nvme_attach_controller", 00:20:03.949 "params": { 00:20:03.949 "name": "TLSTEST", 00:20:03.949 "trtype": "TCP", 00:20:03.949 "adrfam": "IPv4", 00:20:03.949 "traddr": "10.0.0.2", 00:20:03.949 "trsvcid": "4420", 00:20:03.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.949 "prchk_reftag": false, 00:20:03.949 "prchk_guard": false, 00:20:03.949 "ctrlr_loss_timeout_sec": 0, 00:20:03.949 "reconnect_delay_sec": 0, 00:20:03.949 
"fast_io_fail_timeout_sec": 0, 00:20:03.949 "psk": "key0", 00:20:03.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.949 "hdgst": false, 00:20:03.949 "ddgst": false, 00:20:03.949 "multipath": "multipath" 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "bdev_nvme_set_hotplug", 00:20:03.949 "params": { 00:20:03.949 "period_us": 100000, 00:20:03.949 "enable": false 00:20:03.949 } 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "method": "bdev_wait_for_examine" 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }, 00:20:03.949 { 00:20:03.949 "subsystem": "nbd", 00:20:03.949 "config": [] 00:20:03.949 } 00:20:03.949 ] 00:20:03.949 }' 00:20:03.949 [2024-11-06 13:44:27.124807] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:03.949 [2024-11-06 13:44:27.124859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662616 ] 00:20:03.949 [2024-11-06 13:44:27.183978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.949 [2024-11-06 13:44:27.213096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.210 [2024-11-06 13:44:27.347138] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.782 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.782 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:04.782 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:04.782 Running I/O for 10 seconds... 
00:20:06.665 6119.00 IOPS, 23.90 MiB/s [2024-11-06T12:44:31.426Z] 6318.50 IOPS, 24.68 MiB/s [2024-11-06T12:44:32.367Z] 6434.00 IOPS, 25.13 MiB/s [2024-11-06T12:44:33.310Z] 6398.75 IOPS, 25.00 MiB/s [2024-11-06T12:44:34.251Z] 6447.60 IOPS, 25.19 MiB/s [2024-11-06T12:44:35.194Z] 6493.17 IOPS, 25.36 MiB/s [2024-11-06T12:44:36.135Z] 6467.71 IOPS, 25.26 MiB/s [2024-11-06T12:44:37.076Z] 6496.12 IOPS, 25.38 MiB/s [2024-11-06T12:44:38.459Z] 6491.33 IOPS, 25.36 MiB/s [2024-11-06T12:44:38.459Z] 6485.80 IOPS, 25.34 MiB/s 00:20:15.083 Latency(us) 00:20:15.083 [2024-11-06T12:44:38.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.083 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.083 Verification LBA range: start 0x0 length 0x2000 00:20:15.083 TLSTESTn1 : 10.01 6490.93 25.36 0.00 0.00 19691.15 4696.75 23702.19 00:20:15.083 [2024-11-06T12:44:38.460Z] =================================================================================================================== 00:20:15.084 [2024-11-06T12:44:38.460Z] Total : 6490.93 25.36 0.00 0.00 19691.15 4696.75 23702.19 00:20:15.084 { 00:20:15.084 "results": [ 00:20:15.084 { 00:20:15.084 "job": "TLSTESTn1", 00:20:15.084 "core_mask": "0x4", 00:20:15.084 "workload": "verify", 00:20:15.084 "status": "finished", 00:20:15.084 "verify_range": { 00:20:15.084 "start": 0, 00:20:15.084 "length": 8192 00:20:15.084 }, 00:20:15.084 "queue_depth": 128, 00:20:15.084 "io_size": 4096, 00:20:15.084 "runtime": 10.01167, 00:20:15.084 "iops": 6490.9250904194805, 00:20:15.084 "mibps": 25.355176134451096, 00:20:15.084 "io_failed": 0, 00:20:15.084 "io_timeout": 0, 00:20:15.084 "avg_latency_us": 19691.152482983252, 00:20:15.084 "min_latency_us": 4696.746666666667, 00:20:15.084 "max_latency_us": 23702.18666666667 00:20:15.084 } 00:20:15.084 ], 00:20:15.084 "core_count": 1 00:20:15.084 } 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 662616 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 662616 ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 662616 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 662616 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 662616' 00:20:15.084 killing process with pid 662616 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 662616 00:20:15.084 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.084 00:20:15.084 Latency(us) 00:20:15.084 [2024-11-06T12:44:38.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.084 [2024-11-06T12:44:38.460Z] =================================================================================================================== 00:20:15.084 [2024-11-06T12:44:38.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 662616 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 662584 ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 662584' 00:20:15.084 killing process with pid 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 662584 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=664958 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 664958 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 664958 ']' 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
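The throughput figures in the bdevperf result JSON above are internally consistent: with 4096-byte I/Os, MiB/s is IOPS times io_size divided by 2^20. A quick awk cross-check using the reported `iops` value from the first run:

```shell
#!/bin/sh
# Cross-check bdevperf's reported mibps against its iops and io_size:
# 6490.925 IOPS * 4096 B per I/O / 1048576 B per MiB, which should
# match the 25.36 MiB/s shown in the summary table.
awk -v iops=6490.9250904194805 -v sz=4096 \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
```

The same relation holds for the 1-second run later in the log (5267.43 IOPS, 20.58 MiB/s).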
00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.084 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:15.345 [2024-11-06 13:44:38.466234] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:15.345 [2024-11-06 13:44:38.466290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.345 [2024-11-06 13:44:38.542297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.345 [2024-11-06 13:44:38.575971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.345 [2024-11-06 13:44:38.576004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.345 [2024-11-06 13:44:38.576012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.345 [2024-11-06 13:44:38.576018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.345 [2024-11-06 13:44:38.576024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.345 [2024-11-06 13:44:38.576584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.X9PJpj8rCT 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X9PJpj8rCT 00:20:15.915 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.175 [2024-11-06 13:44:39.416752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.175 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:16.435 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.435 [2024-11-06 13:44:39.785683] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.435 [2024-11-06 13:44:39.785931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:16.696 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.696 malloc0 00:20:16.696 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:16.957 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:20:17.256 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=665325 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 665325 /var/tmp/bdevperf.sock 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 665325 ']' 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:17.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:17.257 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.549 [2024-11-06 13:44:40.601217] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:17.549 [2024-11-06 13:44:40.601271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665325 ] 00:20:17.549 [2024-11-06 13:44:40.684433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.549 [2024-11-06 13:44:40.713869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.237 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:18.237 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:18.237 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:20:18.237 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.505 [2024-11-06 13:44:41.709310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.505 nvme0n1 00:20:18.505 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.766 Running I/O for 1 seconds... 00:20:19.770 5226.00 IOPS, 20.41 MiB/s 00:20:19.770 Latency(us) 00:20:19.770 [2024-11-06T12:44:43.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.770 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.770 Verification LBA range: start 0x0 length 0x2000 00:20:19.770 nvme0n1 : 1.02 5267.43 20.58 0.00 0.00 24092.74 6799.36 65536.00 00:20:19.770 [2024-11-06T12:44:43.146Z] =================================================================================================================== 00:20:19.770 [2024-11-06T12:44:43.146Z] Total : 5267.43 20.58 0.00 0.00 24092.74 6799.36 65536.00 00:20:19.770 { 00:20:19.770 "results": [ 00:20:19.770 { 00:20:19.770 "job": "nvme0n1", 00:20:19.770 "core_mask": "0x2", 00:20:19.770 "workload": "verify", 00:20:19.770 "status": "finished", 00:20:19.770 "verify_range": { 00:20:19.770 "start": 0, 00:20:19.770 "length": 8192 00:20:19.770 }, 00:20:19.770 "queue_depth": 128, 00:20:19.770 "io_size": 4096, 00:20:19.770 "runtime": 1.016435, 00:20:19.770 "iops": 5267.429791378691, 00:20:19.770 "mibps": 20.575897622573013, 00:20:19.770 "io_failed": 0, 00:20:19.770 "io_timeout": 0, 00:20:19.770 "avg_latency_us": 24092.74238326485, 00:20:19.770 "min_latency_us": 6799.36, 00:20:19.770 "max_latency_us": 65536.0 00:20:19.770 } 00:20:19.770 ], 00:20:19.770 "core_count": 1 00:20:19.770 } 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 665325 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 665325 ']' 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 665325 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:19.770 
13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 665325 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 665325' 00:20:19.770 killing process with pid 665325 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 665325 00:20:19.770 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.770 00:20:19.770 Latency(us) 00:20:19.770 [2024-11-06T12:44:43.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.770 [2024-11-06T12:44:43.146Z] =================================================================================================================== 00:20:19.770 [2024-11-06T12:44:43.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.770 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 665325 00:20:19.770 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 664958 00:20:19.770 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 664958 ']' 00:20:19.770 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 664958 00:20:19.770 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 664958 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 664958' 00:20:20.032 killing process with pid 664958 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 664958 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 664958 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=665874 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 665874 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 665874 ']' 00:20:20.032 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.033 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.033 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:20.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.033 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.033 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.033 [2024-11-06 13:44:43.342693] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:20.033 [2024-11-06 13:44:43.342758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.293 [2024-11-06 13:44:43.420798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.293 [2024-11-06 13:44:43.455784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.293 [2024-11-06 13:44:43.455819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.293 [2024-11-06 13:44:43.455827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.293 [2024-11-06 13:44:43.455834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.293 [2024-11-06 13:44:43.455840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.293 [2024-11-06 13:44:43.456413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.863 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.863 [2024-11-06 13:44:44.188471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.863 malloc0 00:20:20.863 [2024-11-06 13:44:44.215145] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.863 [2024-11-06 13:44:44.215387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=666046 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 666046 /var/tmp/bdevperf.sock 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 666046 ']' 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.124 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.124 [2024-11-06 13:44:44.294475] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:20:21.124 [2024-11-06 13:44:44.294523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666046 ] 00:20:21.124 [2024-11-06 13:44:44.379280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.124 [2024-11-06 13:44:44.409138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.063 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.063 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:22.063 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X9PJpj8rCT 00:20:22.063 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.063 [2024-11-06 13:44:45.388537] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.323 nvme0n1 00:20:22.323 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.323 Running I/O for 1 seconds... 
00:20:23.263 3268.00 IOPS, 12.77 MiB/s 00:20:23.263 Latency(us) 00:20:23.263 [2024-11-06T12:44:46.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.263 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.263 Verification LBA range: start 0x0 length 0x2000 00:20:23.263 nvme0n1 : 1.02 3337.11 13.04 0.00 0.00 38104.35 6498.99 85196.80 00:20:23.263 [2024-11-06T12:44:46.639Z] =================================================================================================================== 00:20:23.263 [2024-11-06T12:44:46.639Z] Total : 3337.11 13.04 0.00 0.00 38104.35 6498.99 85196.80 00:20:23.263 { 00:20:23.263 "results": [ 00:20:23.263 { 00:20:23.263 "job": "nvme0n1", 00:20:23.263 "core_mask": "0x2", 00:20:23.263 "workload": "verify", 00:20:23.263 "status": "finished", 00:20:23.263 "verify_range": { 00:20:23.263 "start": 0, 00:20:23.263 "length": 8192 00:20:23.263 }, 00:20:23.263 "queue_depth": 128, 00:20:23.263 "io_size": 4096, 00:20:23.263 "runtime": 1.017946, 00:20:23.263 "iops": 3337.1121847327854, 00:20:23.263 "mibps": 13.035594471612443, 00:20:23.263 "io_failed": 0, 00:20:23.263 "io_timeout": 0, 00:20:23.263 "avg_latency_us": 38104.35127465411, 00:20:23.263 "min_latency_us": 6498.986666666667, 00:20:23.263 "max_latency_us": 85196.8 00:20:23.263 } 00:20:23.263 ], 00:20:23.263 "core_count": 1 00:20:23.263 } 00:20:23.263 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:23.263 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.263 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.525 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.525 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:23.525 "subsystems": [ 00:20:23.525 { 00:20:23.525 "subsystem": "keyring", 
00:20:23.525 "config": [ 00:20:23.525 { 00:20:23.525 "method": "keyring_file_add_key", 00:20:23.525 "params": { 00:20:23.525 "name": "key0", 00:20:23.525 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:23.525 } 00:20:23.525 } 00:20:23.525 ] 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "subsystem": "iobuf", 00:20:23.525 "config": [ 00:20:23.525 { 00:20:23.525 "method": "iobuf_set_options", 00:20:23.525 "params": { 00:20:23.525 "small_pool_count": 8192, 00:20:23.525 "large_pool_count": 1024, 00:20:23.525 "small_bufsize": 8192, 00:20:23.525 "large_bufsize": 135168, 00:20:23.525 "enable_numa": false 00:20:23.525 } 00:20:23.525 } 00:20:23.525 ] 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "subsystem": "sock", 00:20:23.525 "config": [ 00:20:23.525 { 00:20:23.525 "method": "sock_set_default_impl", 00:20:23.525 "params": { 00:20:23.525 "impl_name": "posix" 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "sock_impl_set_options", 00:20:23.525 "params": { 00:20:23.525 "impl_name": "ssl", 00:20:23.525 "recv_buf_size": 4096, 00:20:23.525 "send_buf_size": 4096, 00:20:23.525 "enable_recv_pipe": true, 00:20:23.525 "enable_quickack": false, 00:20:23.525 "enable_placement_id": 0, 00:20:23.525 "enable_zerocopy_send_server": true, 00:20:23.525 "enable_zerocopy_send_client": false, 00:20:23.525 "zerocopy_threshold": 0, 00:20:23.525 "tls_version": 0, 00:20:23.525 "enable_ktls": false 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "sock_impl_set_options", 00:20:23.525 "params": { 00:20:23.525 "impl_name": "posix", 00:20:23.525 "recv_buf_size": 2097152, 00:20:23.525 "send_buf_size": 2097152, 00:20:23.525 "enable_recv_pipe": true, 00:20:23.525 "enable_quickack": false, 00:20:23.525 "enable_placement_id": 0, 00:20:23.525 "enable_zerocopy_send_server": true, 00:20:23.525 "enable_zerocopy_send_client": false, 00:20:23.525 "zerocopy_threshold": 0, 00:20:23.525 "tls_version": 0, 00:20:23.525 "enable_ktls": false 00:20:23.525 } 00:20:23.525 } 00:20:23.525 ] 
00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "subsystem": "vmd", 00:20:23.525 "config": [] 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "subsystem": "accel", 00:20:23.525 "config": [ 00:20:23.525 { 00:20:23.525 "method": "accel_set_options", 00:20:23.525 "params": { 00:20:23.525 "small_cache_size": 128, 00:20:23.525 "large_cache_size": 16, 00:20:23.525 "task_count": 2048, 00:20:23.525 "sequence_count": 2048, 00:20:23.525 "buf_count": 2048 00:20:23.525 } 00:20:23.525 } 00:20:23.525 ] 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "subsystem": "bdev", 00:20:23.525 "config": [ 00:20:23.525 { 00:20:23.525 "method": "bdev_set_options", 00:20:23.525 "params": { 00:20:23.525 "bdev_io_pool_size": 65535, 00:20:23.525 "bdev_io_cache_size": 256, 00:20:23.525 "bdev_auto_examine": true, 00:20:23.525 "iobuf_small_cache_size": 128, 00:20:23.525 "iobuf_large_cache_size": 16 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "bdev_raid_set_options", 00:20:23.525 "params": { 00:20:23.525 "process_window_size_kb": 1024, 00:20:23.525 "process_max_bandwidth_mb_sec": 0 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "bdev_iscsi_set_options", 00:20:23.525 "params": { 00:20:23.525 "timeout_sec": 30 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "bdev_nvme_set_options", 00:20:23.525 "params": { 00:20:23.525 "action_on_timeout": "none", 00:20:23.525 "timeout_us": 0, 00:20:23.525 "timeout_admin_us": 0, 00:20:23.525 "keep_alive_timeout_ms": 10000, 00:20:23.525 "arbitration_burst": 0, 00:20:23.525 "low_priority_weight": 0, 00:20:23.525 "medium_priority_weight": 0, 00:20:23.525 "high_priority_weight": 0, 00:20:23.525 "nvme_adminq_poll_period_us": 10000, 00:20:23.525 "nvme_ioq_poll_period_us": 0, 00:20:23.525 "io_queue_requests": 0, 00:20:23.525 "delay_cmd_submit": true, 00:20:23.525 "transport_retry_count": 4, 00:20:23.525 "bdev_retry_count": 3, 00:20:23.525 "transport_ack_timeout": 0, 00:20:23.525 "ctrlr_loss_timeout_sec": 0, 00:20:23.525 
"reconnect_delay_sec": 0, 00:20:23.525 "fast_io_fail_timeout_sec": 0, 00:20:23.525 "disable_auto_failback": false, 00:20:23.525 "generate_uuids": false, 00:20:23.525 "transport_tos": 0, 00:20:23.525 "nvme_error_stat": false, 00:20:23.525 "rdma_srq_size": 0, 00:20:23.525 "io_path_stat": false, 00:20:23.525 "allow_accel_sequence": false, 00:20:23.525 "rdma_max_cq_size": 0, 00:20:23.525 "rdma_cm_event_timeout_ms": 0, 00:20:23.525 "dhchap_digests": [ 00:20:23.525 "sha256", 00:20:23.525 "sha384", 00:20:23.525 "sha512" 00:20:23.525 ], 00:20:23.525 "dhchap_dhgroups": [ 00:20:23.525 "null", 00:20:23.525 "ffdhe2048", 00:20:23.525 "ffdhe3072", 00:20:23.525 "ffdhe4096", 00:20:23.525 "ffdhe6144", 00:20:23.525 "ffdhe8192" 00:20:23.525 ] 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "bdev_nvme_set_hotplug", 00:20:23.525 "params": { 00:20:23.525 "period_us": 100000, 00:20:23.525 "enable": false 00:20:23.525 } 00:20:23.525 }, 00:20:23.525 { 00:20:23.525 "method": "bdev_malloc_create", 00:20:23.525 "params": { 00:20:23.525 "name": "malloc0", 00:20:23.525 "num_blocks": 8192, 00:20:23.525 "block_size": 4096, 00:20:23.526 "physical_block_size": 4096, 00:20:23.526 "uuid": "03a6f204-b29e-4bda-8589-ea7705725e8b", 00:20:23.526 "optimal_io_boundary": 0, 00:20:23.526 "md_size": 0, 00:20:23.526 "dif_type": 0, 00:20:23.526 "dif_is_head_of_md": false, 00:20:23.526 "dif_pi_format": 0 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "bdev_wait_for_examine" 00:20:23.526 } 00:20:23.526 ] 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "subsystem": "nbd", 00:20:23.526 "config": [] 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "subsystem": "scheduler", 00:20:23.526 "config": [ 00:20:23.526 { 00:20:23.526 "method": "framework_set_scheduler", 00:20:23.526 "params": { 00:20:23.526 "name": "static" 00:20:23.526 } 00:20:23.526 } 00:20:23.526 ] 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "subsystem": "nvmf", 00:20:23.526 "config": [ 00:20:23.526 { 00:20:23.526 
"method": "nvmf_set_config", 00:20:23.526 "params": { 00:20:23.526 "discovery_filter": "match_any", 00:20:23.526 "admin_cmd_passthru": { 00:20:23.526 "identify_ctrlr": false 00:20:23.526 }, 00:20:23.526 "dhchap_digests": [ 00:20:23.526 "sha256", 00:20:23.526 "sha384", 00:20:23.526 "sha512" 00:20:23.526 ], 00:20:23.526 "dhchap_dhgroups": [ 00:20:23.526 "null", 00:20:23.526 "ffdhe2048", 00:20:23.526 "ffdhe3072", 00:20:23.526 "ffdhe4096", 00:20:23.526 "ffdhe6144", 00:20:23.526 "ffdhe8192" 00:20:23.526 ] 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_set_max_subsystems", 00:20:23.526 "params": { 00:20:23.526 "max_subsystems": 1024 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_set_crdt", 00:20:23.526 "params": { 00:20:23.526 "crdt1": 0, 00:20:23.526 "crdt2": 0, 00:20:23.526 "crdt3": 0 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_create_transport", 00:20:23.526 "params": { 00:20:23.526 "trtype": "TCP", 00:20:23.526 "max_queue_depth": 128, 00:20:23.526 "max_io_qpairs_per_ctrlr": 127, 00:20:23.526 "in_capsule_data_size": 4096, 00:20:23.526 "max_io_size": 131072, 00:20:23.526 "io_unit_size": 131072, 00:20:23.526 "max_aq_depth": 128, 00:20:23.526 "num_shared_buffers": 511, 00:20:23.526 "buf_cache_size": 4294967295, 00:20:23.526 "dif_insert_or_strip": false, 00:20:23.526 "zcopy": false, 00:20:23.526 "c2h_success": false, 00:20:23.526 "sock_priority": 0, 00:20:23.526 "abort_timeout_sec": 1, 00:20:23.526 "ack_timeout": 0, 00:20:23.526 "data_wr_pool_size": 0 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_create_subsystem", 00:20:23.526 "params": { 00:20:23.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.526 "allow_any_host": false, 00:20:23.526 "serial_number": "00000000000000000000", 00:20:23.526 "model_number": "SPDK bdev Controller", 00:20:23.526 "max_namespaces": 32, 00:20:23.526 "min_cntlid": 1, 00:20:23.526 "max_cntlid": 65519, 00:20:23.526 "ana_reporting": 
false 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_subsystem_add_host", 00:20:23.526 "params": { 00:20:23.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.526 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.526 "psk": "key0" 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_subsystem_add_ns", 00:20:23.526 "params": { 00:20:23.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.526 "namespace": { 00:20:23.526 "nsid": 1, 00:20:23.526 "bdev_name": "malloc0", 00:20:23.526 "nguid": "03A6F204B29E4BDA8589EA7705725E8B", 00:20:23.526 "uuid": "03a6f204-b29e-4bda-8589-ea7705725e8b", 00:20:23.526 "no_auto_visible": false 00:20:23.526 } 00:20:23.526 } 00:20:23.526 }, 00:20:23.526 { 00:20:23.526 "method": "nvmf_subsystem_add_listener", 00:20:23.526 "params": { 00:20:23.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.526 "listen_address": { 00:20:23.526 "trtype": "TCP", 00:20:23.526 "adrfam": "IPv4", 00:20:23.526 "traddr": "10.0.0.2", 00:20:23.526 "trsvcid": "4420" 00:20:23.526 }, 00:20:23.526 "secure_channel": false, 00:20:23.526 "sock_impl": "ssl" 00:20:23.526 } 00:20:23.526 } 00:20:23.526 ] 00:20:23.526 } 00:20:23.526 ] 00:20:23.526 }' 00:20:23.526 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:23.787 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:23.787 "subsystems": [ 00:20:23.787 { 00:20:23.787 "subsystem": "keyring", 00:20:23.787 "config": [ 00:20:23.787 { 00:20:23.787 "method": "keyring_file_add_key", 00:20:23.787 "params": { 00:20:23.787 "name": "key0", 00:20:23.787 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:23.787 } 00:20:23.787 } 00:20:23.787 ] 00:20:23.787 }, 00:20:23.787 { 00:20:23.787 "subsystem": "iobuf", 00:20:23.787 "config": [ 00:20:23.787 { 00:20:23.787 "method": "iobuf_set_options", 00:20:23.787 "params": { 00:20:23.787 "small_pool_count": 
8192, 00:20:23.787 "large_pool_count": 1024, 00:20:23.787 "small_bufsize": 8192, 00:20:23.788 "large_bufsize": 135168, 00:20:23.788 "enable_numa": false 00:20:23.788 } 00:20:23.788 } 00:20:23.788 ] 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "subsystem": "sock", 00:20:23.788 "config": [ 00:20:23.788 { 00:20:23.788 "method": "sock_set_default_impl", 00:20:23.788 "params": { 00:20:23.788 "impl_name": "posix" 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "sock_impl_set_options", 00:20:23.788 "params": { 00:20:23.788 "impl_name": "ssl", 00:20:23.788 "recv_buf_size": 4096, 00:20:23.788 "send_buf_size": 4096, 00:20:23.788 "enable_recv_pipe": true, 00:20:23.788 "enable_quickack": false, 00:20:23.788 "enable_placement_id": 0, 00:20:23.788 "enable_zerocopy_send_server": true, 00:20:23.788 "enable_zerocopy_send_client": false, 00:20:23.788 "zerocopy_threshold": 0, 00:20:23.788 "tls_version": 0, 00:20:23.788 "enable_ktls": false 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "sock_impl_set_options", 00:20:23.788 "params": { 00:20:23.788 "impl_name": "posix", 00:20:23.788 "recv_buf_size": 2097152, 00:20:23.788 "send_buf_size": 2097152, 00:20:23.788 "enable_recv_pipe": true, 00:20:23.788 "enable_quickack": false, 00:20:23.788 "enable_placement_id": 0, 00:20:23.788 "enable_zerocopy_send_server": true, 00:20:23.788 "enable_zerocopy_send_client": false, 00:20:23.788 "zerocopy_threshold": 0, 00:20:23.788 "tls_version": 0, 00:20:23.788 "enable_ktls": false 00:20:23.788 } 00:20:23.788 } 00:20:23.788 ] 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "subsystem": "vmd", 00:20:23.788 "config": [] 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "subsystem": "accel", 00:20:23.788 "config": [ 00:20:23.788 { 00:20:23.788 "method": "accel_set_options", 00:20:23.788 "params": { 00:20:23.788 "small_cache_size": 128, 00:20:23.788 "large_cache_size": 16, 00:20:23.788 "task_count": 2048, 00:20:23.788 "sequence_count": 2048, 00:20:23.788 "buf_count": 2048 
00:20:23.788 } 00:20:23.788 } 00:20:23.788 ] 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "subsystem": "bdev", 00:20:23.788 "config": [ 00:20:23.788 { 00:20:23.788 "method": "bdev_set_options", 00:20:23.788 "params": { 00:20:23.788 "bdev_io_pool_size": 65535, 00:20:23.788 "bdev_io_cache_size": 256, 00:20:23.788 "bdev_auto_examine": true, 00:20:23.788 "iobuf_small_cache_size": 128, 00:20:23.788 "iobuf_large_cache_size": 16 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_raid_set_options", 00:20:23.788 "params": { 00:20:23.788 "process_window_size_kb": 1024, 00:20:23.788 "process_max_bandwidth_mb_sec": 0 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_iscsi_set_options", 00:20:23.788 "params": { 00:20:23.788 "timeout_sec": 30 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_nvme_set_options", 00:20:23.788 "params": { 00:20:23.788 "action_on_timeout": "none", 00:20:23.788 "timeout_us": 0, 00:20:23.788 "timeout_admin_us": 0, 00:20:23.788 "keep_alive_timeout_ms": 10000, 00:20:23.788 "arbitration_burst": 0, 00:20:23.788 "low_priority_weight": 0, 00:20:23.788 "medium_priority_weight": 0, 00:20:23.788 "high_priority_weight": 0, 00:20:23.788 "nvme_adminq_poll_period_us": 10000, 00:20:23.788 "nvme_ioq_poll_period_us": 0, 00:20:23.788 "io_queue_requests": 512, 00:20:23.788 "delay_cmd_submit": true, 00:20:23.788 "transport_retry_count": 4, 00:20:23.788 "bdev_retry_count": 3, 00:20:23.788 "transport_ack_timeout": 0, 00:20:23.788 "ctrlr_loss_timeout_sec": 0, 00:20:23.788 "reconnect_delay_sec": 0, 00:20:23.788 "fast_io_fail_timeout_sec": 0, 00:20:23.788 "disable_auto_failback": false, 00:20:23.788 "generate_uuids": false, 00:20:23.788 "transport_tos": 0, 00:20:23.788 "nvme_error_stat": false, 00:20:23.788 "rdma_srq_size": 0, 00:20:23.788 "io_path_stat": false, 00:20:23.788 "allow_accel_sequence": false, 00:20:23.788 "rdma_max_cq_size": 0, 00:20:23.788 "rdma_cm_event_timeout_ms": 0, 00:20:23.788 
"dhchap_digests": [ 00:20:23.788 "sha256", 00:20:23.788 "sha384", 00:20:23.788 "sha512" 00:20:23.788 ], 00:20:23.788 "dhchap_dhgroups": [ 00:20:23.788 "null", 00:20:23.788 "ffdhe2048", 00:20:23.788 "ffdhe3072", 00:20:23.788 "ffdhe4096", 00:20:23.788 "ffdhe6144", 00:20:23.788 "ffdhe8192" 00:20:23.788 ] 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_nvme_attach_controller", 00:20:23.788 "params": { 00:20:23.788 "name": "nvme0", 00:20:23.788 "trtype": "TCP", 00:20:23.788 "adrfam": "IPv4", 00:20:23.788 "traddr": "10.0.0.2", 00:20:23.788 "trsvcid": "4420", 00:20:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.788 "prchk_reftag": false, 00:20:23.788 "prchk_guard": false, 00:20:23.788 "ctrlr_loss_timeout_sec": 0, 00:20:23.788 "reconnect_delay_sec": 0, 00:20:23.788 "fast_io_fail_timeout_sec": 0, 00:20:23.788 "psk": "key0", 00:20:23.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.788 "hdgst": false, 00:20:23.788 "ddgst": false, 00:20:23.788 "multipath": "multipath" 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_nvme_set_hotplug", 00:20:23.788 "params": { 00:20:23.788 "period_us": 100000, 00:20:23.788 "enable": false 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_enable_histogram", 00:20:23.788 "params": { 00:20:23.788 "name": "nvme0n1", 00:20:23.788 "enable": true 00:20:23.788 } 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "method": "bdev_wait_for_examine" 00:20:23.788 } 00:20:23.788 ] 00:20:23.788 }, 00:20:23.788 { 00:20:23.788 "subsystem": "nbd", 00:20:23.788 "config": [] 00:20:23.788 } 00:20:23.788 ] 00:20:23.788 }' 00:20:23.788 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 666046 00:20:23.788 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 666046 ']' 00:20:23.788 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 666046 00:20:23.788 13:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.788 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.788 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 666046 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 666046' 00:20:23.788 killing process with pid 666046 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 666046 00:20:23.788 Received shutdown signal, test time was about 1.000000 seconds 00:20:23.788 00:20:23.788 Latency(us) 00:20:23.788 [2024-11-06T12:44:47.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.788 [2024-11-06T12:44:47.164Z] =================================================================================================================== 00:20:23.788 [2024-11-06T12:44:47.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 666046 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 665874 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 665874 ']' 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 665874 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.788 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.788 13:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 665874 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 665874' 00:20:24.049 killing process with pid 665874 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 665874 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 665874 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.049 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:24.049 "subsystems": [ 00:20:24.049 { 00:20:24.049 "subsystem": "keyring", 00:20:24.049 "config": [ 00:20:24.049 { 00:20:24.049 "method": "keyring_file_add_key", 00:20:24.049 "params": { 00:20:24.049 "name": "key0", 00:20:24.049 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:24.049 } 00:20:24.049 } 00:20:24.049 ] 00:20:24.049 }, 00:20:24.049 { 00:20:24.049 "subsystem": "iobuf", 00:20:24.049 "config": [ 00:20:24.049 { 00:20:24.049 "method": "iobuf_set_options", 00:20:24.049 "params": { 00:20:24.050 "small_pool_count": 8192, 00:20:24.050 "large_pool_count": 1024, 00:20:24.050 "small_bufsize": 8192, 00:20:24.050 "large_bufsize": 135168, 00:20:24.050 "enable_numa": false 00:20:24.050 } 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 
}, 00:20:24.050 { 00:20:24.050 "subsystem": "sock", 00:20:24.050 "config": [ 00:20:24.050 { 00:20:24.050 "method": "sock_set_default_impl", 00:20:24.050 "params": { 00:20:24.050 "impl_name": "posix" 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "sock_impl_set_options", 00:20:24.050 "params": { 00:20:24.050 "impl_name": "ssl", 00:20:24.050 "recv_buf_size": 4096, 00:20:24.050 "send_buf_size": 4096, 00:20:24.050 "enable_recv_pipe": true, 00:20:24.050 "enable_quickack": false, 00:20:24.050 "enable_placement_id": 0, 00:20:24.050 "enable_zerocopy_send_server": true, 00:20:24.050 "enable_zerocopy_send_client": false, 00:20:24.050 "zerocopy_threshold": 0, 00:20:24.050 "tls_version": 0, 00:20:24.050 "enable_ktls": false 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "sock_impl_set_options", 00:20:24.050 "params": { 00:20:24.050 "impl_name": "posix", 00:20:24.050 "recv_buf_size": 2097152, 00:20:24.050 "send_buf_size": 2097152, 00:20:24.050 "enable_recv_pipe": true, 00:20:24.050 "enable_quickack": false, 00:20:24.050 "enable_placement_id": 0, 00:20:24.050 "enable_zerocopy_send_server": true, 00:20:24.050 "enable_zerocopy_send_client": false, 00:20:24.050 "zerocopy_threshold": 0, 00:20:24.050 "tls_version": 0, 00:20:24.050 "enable_ktls": false 00:20:24.050 } 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "vmd", 00:20:24.050 "config": [] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "accel", 00:20:24.050 "config": [ 00:20:24.050 { 00:20:24.050 "method": "accel_set_options", 00:20:24.050 "params": { 00:20:24.050 "small_cache_size": 128, 00:20:24.050 "large_cache_size": 16, 00:20:24.050 "task_count": 2048, 00:20:24.050 "sequence_count": 2048, 00:20:24.050 "buf_count": 2048 00:20:24.050 } 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "bdev", 00:20:24.050 "config": [ 00:20:24.050 { 00:20:24.050 "method": "bdev_set_options", 00:20:24.050 "params": 
{ 00:20:24.050 "bdev_io_pool_size": 65535, 00:20:24.050 "bdev_io_cache_size": 256, 00:20:24.050 "bdev_auto_examine": true, 00:20:24.050 "iobuf_small_cache_size": 128, 00:20:24.050 "iobuf_large_cache_size": 16 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_raid_set_options", 00:20:24.050 "params": { 00:20:24.050 "process_window_size_kb": 1024, 00:20:24.050 "process_max_bandwidth_mb_sec": 0 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_iscsi_set_options", 00:20:24.050 "params": { 00:20:24.050 "timeout_sec": 30 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_nvme_set_options", 00:20:24.050 "params": { 00:20:24.050 "action_on_timeout": "none", 00:20:24.050 "timeout_us": 0, 00:20:24.050 "timeout_admin_us": 0, 00:20:24.050 "keep_alive_timeout_ms": 10000, 00:20:24.050 "arbitration_burst": 0, 00:20:24.050 "low_priority_weight": 0, 00:20:24.050 "medium_priority_weight": 0, 00:20:24.050 "high_priority_weight": 0, 00:20:24.050 "nvme_adminq_poll_period_us": 10000, 00:20:24.050 "nvme_ioq_poll_period_us": 0, 00:20:24.050 "io_queue_requests": 0, 00:20:24.050 "delay_cmd_submit": true, 00:20:24.050 "transport_retry_count": 4, 00:20:24.050 "bdev_retry_count": 3, 00:20:24.050 "transport_ack_timeout": 0, 00:20:24.050 "ctrlr_loss_timeout_sec": 0, 00:20:24.050 "reconnect_delay_sec": 0, 00:20:24.050 "fast_io_fail_timeout_sec": 0, 00:20:24.050 "disable_auto_failback": false, 00:20:24.050 "generate_uuids": false, 00:20:24.050 "transport_tos": 0, 00:20:24.050 "nvme_error_stat": false, 00:20:24.050 "rdma_srq_size": 0, 00:20:24.050 "io_path_stat": false, 00:20:24.050 "allow_accel_sequence": false, 00:20:24.050 "rdma_max_cq_size": 0, 00:20:24.050 "rdma_cm_event_timeout_ms": 0, 00:20:24.050 "dhchap_digests": [ 00:20:24.050 "sha256", 00:20:24.050 "sha384", 00:20:24.050 "sha512" 00:20:24.050 ], 00:20:24.050 "dhchap_dhgroups": [ 00:20:24.050 "null", 00:20:24.050 "ffdhe2048", 00:20:24.050 "ffdhe3072", 00:20:24.050 
"ffdhe4096", 00:20:24.050 "ffdhe6144", 00:20:24.050 "ffdhe8192" 00:20:24.050 ] 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_nvme_set_hotplug", 00:20:24.050 "params": { 00:20:24.050 "period_us": 100000, 00:20:24.050 "enable": false 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_malloc_create", 00:20:24.050 "params": { 00:20:24.050 "name": "malloc0", 00:20:24.050 "num_blocks": 8192, 00:20:24.050 "block_size": 4096, 00:20:24.050 "physical_block_size": 4096, 00:20:24.050 "uuid": "03a6f204-b29e-4bda-8589-ea7705725e8b", 00:20:24.050 "optimal_io_boundary": 0, 00:20:24.050 "md_size": 0, 00:20:24.050 "dif_type": 0, 00:20:24.050 "dif_is_head_of_md": false, 00:20:24.050 "dif_pi_format": 0 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "bdev_wait_for_examine" 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "nbd", 00:20:24.050 "config": [] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "scheduler", 00:20:24.050 "config": [ 00:20:24.050 { 00:20:24.050 "method": "framework_set_scheduler", 00:20:24.050 "params": { 00:20:24.050 "name": "static" 00:20:24.050 } 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "subsystem": "nvmf", 00:20:24.050 "config": [ 00:20:24.050 { 00:20:24.050 "method": "nvmf_set_config", 00:20:24.050 "params": { 00:20:24.050 "discovery_filter": "match_any", 00:20:24.050 "admin_cmd_passthru": { 00:20:24.050 "identify_ctrlr": false 00:20:24.050 }, 00:20:24.050 "dhchap_digests": [ 00:20:24.050 "sha256", 00:20:24.050 "sha384", 00:20:24.050 "sha512" 00:20:24.050 ], 00:20:24.050 "dhchap_dhgroups": [ 00:20:24.050 "null", 00:20:24.050 "ffdhe2048", 00:20:24.050 "ffdhe3072", 00:20:24.050 "ffdhe4096", 00:20:24.050 "ffdhe6144", 00:20:24.050 "ffdhe8192" 00:20:24.050 ] 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_set_max_subsystems", 00:20:24.050 "params": { 00:20:24.050 "max_subsystems": 1024 
00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_set_crdt", 00:20:24.050 "params": { 00:20:24.050 "crdt1": 0, 00:20:24.050 "crdt2": 0, 00:20:24.050 "crdt3": 0 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_create_transport", 00:20:24.050 "params": { 00:20:24.050 "trtype": "TCP", 00:20:24.050 "max_queue_depth": 128, 00:20:24.050 "max_io_qpairs_per_ctrlr": 127, 00:20:24.050 "in_capsule_data_size": 4096, 00:20:24.050 "max_io_size": 131072, 00:20:24.050 "io_unit_size": 131072, 00:20:24.050 "max_aq_depth": 128, 00:20:24.050 "num_shared_buffers": 511, 00:20:24.050 "buf_cache_size": 4294967295, 00:20:24.050 "dif_insert_or_strip": false, 00:20:24.050 "zcopy": false, 00:20:24.050 "c2h_success": false, 00:20:24.050 "sock_priority": 0, 00:20:24.050 "abort_timeout_sec": 1, 00:20:24.050 "ack_timeout": 0, 00:20:24.050 "data_wr_pool_size": 0 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_create_subsystem", 00:20:24.050 "params": { 00:20:24.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.050 "allow_any_host": false, 00:20:24.050 "serial_number": "00000000000000000000", 00:20:24.050 "model_number": "SPDK bdev Controller", 00:20:24.050 "max_namespaces": 32, 00:20:24.050 "min_cntlid": 1, 00:20:24.050 "max_cntlid": 65519, 00:20:24.050 "ana_reporting": false 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_subsystem_add_host", 00:20:24.050 "params": { 00:20:24.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.050 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.050 "psk": "key0" 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_subsystem_add_ns", 00:20:24.050 "params": { 00:20:24.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.050 "namespace": { 00:20:24.050 "nsid": 1, 00:20:24.050 "bdev_name": "malloc0", 00:20:24.050 "nguid": "03A6F204B29E4BDA8589EA7705725E8B", 00:20:24.050 "uuid": "03a6f204-b29e-4bda-8589-ea7705725e8b", 00:20:24.050 "no_auto_visible": 
false 00:20:24.050 } 00:20:24.050 } 00:20:24.050 }, 00:20:24.050 { 00:20:24.050 "method": "nvmf_subsystem_add_listener", 00:20:24.050 "params": { 00:20:24.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.050 "listen_address": { 00:20:24.050 "trtype": "TCP", 00:20:24.050 "adrfam": "IPv4", 00:20:24.050 "traddr": "10.0.0.2", 00:20:24.050 "trsvcid": "4420" 00:20:24.050 }, 00:20:24.050 "secure_channel": false, 00:20:24.050 "sock_impl": "ssl" 00:20:24.050 } 00:20:24.050 } 00:20:24.050 ] 00:20:24.050 } 00:20:24.051 ] 00:20:24.051 }' 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=666729 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 666729 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 666729 ']' 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.051 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.051 [2024-11-06 13:44:47.396593] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:20:24.051 [2024-11-06 13:44:47.396653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.311 [2024-11-06 13:44:47.472623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.311 [2024-11-06 13:44:47.507663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.311 [2024-11-06 13:44:47.507699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.311 [2024-11-06 13:44:47.507707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.311 [2024-11-06 13:44:47.507713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.311 [2024-11-06 13:44:47.507719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.311 [2024-11-06 13:44:47.508291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.572 [2024-11-06 13:44:47.707116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.572 [2024-11-06 13:44:47.739129] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.572 [2024-11-06 13:44:47.739370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.832 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:24.832 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:24.832 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.832 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.832 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=666834 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 666834 /var/tmp/bdevperf.sock 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 666834 ']' 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:25.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.094 "subsystems": [ 00:20:25.094 { 00:20:25.094 "subsystem": "keyring", 00:20:25.094 "config": [ 00:20:25.094 { 00:20:25.094 "method": "keyring_file_add_key", 00:20:25.094 "params": { 00:20:25.094 "name": "key0", 00:20:25.094 "path": "/tmp/tmp.X9PJpj8rCT" 00:20:25.094 } 00:20:25.094 } 00:20:25.094 ] 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "subsystem": "iobuf", 00:20:25.094 "config": [ 00:20:25.094 { 00:20:25.094 "method": "iobuf_set_options", 00:20:25.094 "params": { 00:20:25.094 "small_pool_count": 8192, 00:20:25.094 "large_pool_count": 1024, 00:20:25.094 "small_bufsize": 8192, 00:20:25.094 "large_bufsize": 135168, 00:20:25.094 "enable_numa": false 00:20:25.094 } 00:20:25.094 } 00:20:25.094 ] 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "subsystem": "sock", 00:20:25.094 "config": [ 00:20:25.094 { 00:20:25.094 "method": "sock_set_default_impl", 00:20:25.094 "params": { 00:20:25.094 "impl_name": "posix" 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "sock_impl_set_options", 00:20:25.094 "params": { 00:20:25.094 "impl_name": "ssl", 00:20:25.094 "recv_buf_size": 4096, 00:20:25.094 "send_buf_size": 4096, 00:20:25.094 "enable_recv_pipe": true, 00:20:25.094 "enable_quickack": false, 00:20:25.094 "enable_placement_id": 0, 00:20:25.094 "enable_zerocopy_send_server": true, 00:20:25.094 
"enable_zerocopy_send_client": false, 00:20:25.094 "zerocopy_threshold": 0, 00:20:25.094 "tls_version": 0, 00:20:25.094 "enable_ktls": false 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "sock_impl_set_options", 00:20:25.094 "params": { 00:20:25.094 "impl_name": "posix", 00:20:25.094 "recv_buf_size": 2097152, 00:20:25.094 "send_buf_size": 2097152, 00:20:25.094 "enable_recv_pipe": true, 00:20:25.094 "enable_quickack": false, 00:20:25.094 "enable_placement_id": 0, 00:20:25.094 "enable_zerocopy_send_server": true, 00:20:25.094 "enable_zerocopy_send_client": false, 00:20:25.094 "zerocopy_threshold": 0, 00:20:25.094 "tls_version": 0, 00:20:25.094 "enable_ktls": false 00:20:25.094 } 00:20:25.094 } 00:20:25.094 ] 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "subsystem": "vmd", 00:20:25.094 "config": [] 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "subsystem": "accel", 00:20:25.094 "config": [ 00:20:25.094 { 00:20:25.094 "method": "accel_set_options", 00:20:25.094 "params": { 00:20:25.094 "small_cache_size": 128, 00:20:25.094 "large_cache_size": 16, 00:20:25.094 "task_count": 2048, 00:20:25.094 "sequence_count": 2048, 00:20:25.094 "buf_count": 2048 00:20:25.094 } 00:20:25.094 } 00:20:25.094 ] 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "subsystem": "bdev", 00:20:25.094 "config": [ 00:20:25.094 { 00:20:25.094 "method": "bdev_set_options", 00:20:25.094 "params": { 00:20:25.094 "bdev_io_pool_size": 65535, 00:20:25.094 "bdev_io_cache_size": 256, 00:20:25.094 "bdev_auto_examine": true, 00:20:25.094 "iobuf_small_cache_size": 128, 00:20:25.094 "iobuf_large_cache_size": 16 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "bdev_raid_set_options", 00:20:25.094 "params": { 00:20:25.094 "process_window_size_kb": 1024, 00:20:25.094 "process_max_bandwidth_mb_sec": 0 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "bdev_iscsi_set_options", 00:20:25.094 "params": { 00:20:25.094 "timeout_sec": 30 00:20:25.094 } 00:20:25.094 }, 
00:20:25.094 { 00:20:25.094 "method": "bdev_nvme_set_options", 00:20:25.094 "params": { 00:20:25.094 "action_on_timeout": "none", 00:20:25.094 "timeout_us": 0, 00:20:25.094 "timeout_admin_us": 0, 00:20:25.094 "keep_alive_timeout_ms": 10000, 00:20:25.094 "arbitration_burst": 0, 00:20:25.094 "low_priority_weight": 0, 00:20:25.094 "medium_priority_weight": 0, 00:20:25.094 "high_priority_weight": 0, 00:20:25.094 "nvme_adminq_poll_period_us": 10000, 00:20:25.094 "nvme_ioq_poll_period_us": 0, 00:20:25.094 "io_queue_requests": 512, 00:20:25.094 "delay_cmd_submit": true, 00:20:25.094 "transport_retry_count": 4, 00:20:25.094 "bdev_retry_count": 3, 00:20:25.094 "transport_ack_timeout": 0, 00:20:25.094 "ctrlr_loss_timeout_sec": 0, 00:20:25.094 "reconnect_delay_sec": 0, 00:20:25.094 "fast_io_fail_timeout_sec": 0, 00:20:25.094 "disable_auto_failback": false, 00:20:25.094 "generate_uuids": false, 00:20:25.094 "transport_tos": 0, 00:20:25.094 "nvme_error_stat": false, 00:20:25.094 "rdma_srq_size": 0, 00:20:25.094 "io_path_stat": false, 00:20:25.094 "allow_accel_sequence": false, 00:20:25.094 "rdma_max_cq_size": 0, 00:20:25.094 "rdma_cm_event_timeout_ms": 0, 00:20:25.094 "dhchap_digests": [ 00:20:25.094 "sha256", 00:20:25.094 "sha384", 00:20:25.094 "sha512" 00:20:25.094 ], 00:20:25.094 "dhchap_dhgroups": [ 00:20:25.094 "null", 00:20:25.094 "ffdhe2048", 00:20:25.094 "ffdhe3072", 00:20:25.094 "ffdhe4096", 00:20:25.094 "ffdhe6144", 00:20:25.094 "ffdhe8192" 00:20:25.094 ] 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "bdev_nvme_attach_controller", 00:20:25.094 "params": { 00:20:25.094 "name": "nvme0", 00:20:25.094 "trtype": "TCP", 00:20:25.094 "adrfam": "IPv4", 00:20:25.094 "traddr": "10.0.0.2", 00:20:25.094 "trsvcid": "4420", 00:20:25.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.094 "prchk_reftag": false, 00:20:25.094 "prchk_guard": false, 00:20:25.094 "ctrlr_loss_timeout_sec": 0, 00:20:25.094 "reconnect_delay_sec": 0, 00:20:25.094 
"fast_io_fail_timeout_sec": 0, 00:20:25.094 "psk": "key0", 00:20:25.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.094 "hdgst": false, 00:20:25.094 "ddgst": false, 00:20:25.094 "multipath": "multipath" 00:20:25.094 } 00:20:25.094 }, 00:20:25.094 { 00:20:25.094 "method": "bdev_nvme_set_hotplug", 00:20:25.094 "params": { 00:20:25.095 "period_us": 100000, 00:20:25.095 "enable": false 00:20:25.095 } 00:20:25.095 }, 00:20:25.095 { 00:20:25.095 "method": "bdev_enable_histogram", 00:20:25.095 "params": { 00:20:25.095 "name": "nvme0n1", 00:20:25.095 "enable": true 00:20:25.095 } 00:20:25.095 }, 00:20:25.095 { 00:20:25.095 "method": "bdev_wait_for_examine" 00:20:25.095 } 00:20:25.095 ] 00:20:25.095 }, 00:20:25.095 { 00:20:25.095 "subsystem": "nbd", 00:20:25.095 "config": [] 00:20:25.095 } 00:20:25.095 ] 00:20:25.095 }' 00:20:25.095 [2024-11-06 13:44:48.272957] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:20:25.095 [2024-11-06 13:44:48.273012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666834 ] 00:20:25.095 [2024-11-06 13:44:48.330417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.095 [2024-11-06 13:44:48.360080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.355 [2024-11-06 13:44:48.495158] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.927 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.188 Running I/O for 1 seconds... 00:20:27.129 4904.00 IOPS, 19.16 MiB/s 00:20:27.129 Latency(us) 00:20:27.129 [2024-11-06T12:44:50.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.129 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.129 Verification LBA range: start 0x0 length 0x2000 00:20:27.129 nvme0n1 : 1.01 4961.67 19.38 0.00 0.00 25617.49 6307.84 56797.87 00:20:27.129 [2024-11-06T12:44:50.505Z] =================================================================================================================== 00:20:27.129 [2024-11-06T12:44:50.505Z] Total : 4961.67 19.38 0.00 0.00 25617.49 6307.84 56797.87 00:20:27.129 { 00:20:27.129 "results": [ 00:20:27.129 { 00:20:27.129 "job": "nvme0n1", 00:20:27.129 "core_mask": "0x2", 00:20:27.129 "workload": "verify", 00:20:27.129 "status": "finished", 00:20:27.129 "verify_range": { 00:20:27.129 "start": 0, 00:20:27.129 "length": 8192 00:20:27.129 }, 00:20:27.129 "queue_depth": 128, 00:20:27.129 "io_size": 4096, 00:20:27.129 "runtime": 1.014175, 00:20:27.129 "iops": 4961.668351122834, 00:20:27.129 "mibps": 19.38151699657357, 00:20:27.129 "io_failed": 0, 00:20:27.129 "io_timeout": 0, 00:20:27.129 "avg_latency_us": 25617.49265500795, 00:20:27.129 "min_latency_us": 6307.84, 00:20:27.129 "max_latency_us": 56797.86666666667 00:20:27.129 } 00:20:27.129 ], 00:20:27.129 "core_count": 1 00:20:27.129 } 00:20:27.129 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:20:27.129 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.129 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.129 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:27.129 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.130 nvmf_trace.0 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 666834 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 666834 ']' 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 666834 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.130 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
ps --no-headers -o comm= 666834 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 666834' 00:20:27.390 killing process with pid 666834 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 666834 00:20:27.390 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.390 00:20:27.390 Latency(us) 00:20:27.390 [2024-11-06T12:44:50.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.390 [2024-11-06T12:44:50.766Z] =================================================================================================================== 00:20:27.390 [2024-11-06T12:44:50.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 666834 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.390 rmmod nvme_tcp 00:20:27.390 rmmod nvme_fabrics 00:20:27.390 rmmod nvme_keyring 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 666729 ']' 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 666729 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 666729 ']' 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 666729 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.390 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 666729 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 666729' 00:20:27.650 killing process with pid 666729 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 666729 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 666729 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.650 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.192 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.193 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2YUygqi0Zm /tmp/tmp.v7GbzbinxL /tmp/tmp.X9PJpj8rCT 00:20:30.193 00:20:30.193 real 1m23.064s 00:20:30.193 user 2m8.909s 00:20:30.193 sys 0m26.611s 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 ************************************ 00:20:30.193 END TEST nvmf_tls 00:20:30.193 ************************************ 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 ************************************ 00:20:30.193 START TEST nvmf_fips 00:20:30.193 ************************************ 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.193 * Looking for test storage... 00:20:30.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.193 
13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:30.193 13:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.193 --rc genhtml_branch_coverage=1 00:20:30.193 --rc genhtml_function_coverage=1 00:20:30.193 --rc genhtml_legend=1 00:20:30.193 --rc geninfo_all_blocks=1 00:20:30.193 --rc geninfo_unexecuted_blocks=1 00:20:30.193 00:20:30.193 ' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.193 --rc genhtml_branch_coverage=1 00:20:30.193 --rc genhtml_function_coverage=1 00:20:30.193 --rc genhtml_legend=1 00:20:30.193 --rc geninfo_all_blocks=1 00:20:30.193 --rc geninfo_unexecuted_blocks=1 00:20:30.193 00:20:30.193 ' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.193 --rc genhtml_branch_coverage=1 00:20:30.193 --rc genhtml_function_coverage=1 00:20:30.193 --rc genhtml_legend=1 00:20:30.193 --rc geninfo_all_blocks=1 00:20:30.193 --rc geninfo_unexecuted_blocks=1 00:20:30.193 00:20:30.193 ' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.193 --rc genhtml_branch_coverage=1 00:20:30.193 --rc genhtml_function_coverage=1 00:20:30.193 --rc genhtml_legend=1 00:20:30.193 --rc geninfo_all_blocks=1 00:20:30.193 --rc geninfo_unexecuted_blocks=1 00:20:30.193 00:20:30.193 ' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.193 13:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.193 13:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:30.193 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:30.194 Error setting digest 00:20:30.194 4092E97EC27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:30.194 4092E97EC27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:30.194 13:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.194 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
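The `ge`/`cmp_versions` walk traced earlier in this chunk (scripts/common.sh@333-368, used for the OpenSSL `>= 3.0.0` check) splits each version on `.`/`-`/`:` and compares the fields numerically, left to right. A minimal Python sketch of the same comparison, assuming only the semantics visible in the trace:

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Field-by-field numeric comparison of dotted versions, mirroring the
    cmp_versions loop traced above (shorter versions pad with zeros)."""
    def split(v: str) -> list:
        return [int(d) for d in re.split(r"[.:-]", v) if d.isdigit()]
    a, b = split(v1), split(v2)
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    for x, y in zip(a, b):
        if x > y:
            return op in (">", ">=")
        if x < y:
            return op in ("<", "<=")
    return op in (">=", "<=", "==")

def ge(v1: str, v2: str) -> bool:
    return cmp_versions(v1, ">=", v2)
```

In the trace above, `ge 3.1.1 3.0.0` succeeds at the second field (1 > 0), which is why the comparison returns at scripts/common.sh@367 without visiting the last field.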
00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.332 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:38.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:38.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:38.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
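The `e810`/`x722`/`mlx` arrays populated at nvmf/common.sh@320-344 above key supported NICs by PCI vendor:device ID. A hedged sketch of that classification, using only a subset of the IDs visible in the trace (the real script builds the arrays from a PCI bus cache, and the mellanox list is longer):

```python
# Illustrative subset of the vendor:device -> family mapping built by
# gather_supported_nvmf_pci_devs in the trace above.
NIC_FAMILIES = {
    ("0x8086", "0x1592"): "e810",  # Intel E810
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",  # Intel X722
    ("0x15b3", "0x1017"): "mlx",   # Mellanox ConnectX-5
    ("0x15b3", "0x1019"): "mlx",
}

def nic_family(vendor: str, device: str) -> str:
    """Classify a PCI NIC into the family buckets used by the harness."""
    return NIC_FAMILIES.get((vendor, device), "unknown")
```

The two ports found above (`0x8086 - 0x159b`) resolve to the e810 family, which is why `pci_devs` is narrowed to the e810 list at nvmf/common.sh@356.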
00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:38.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.333 13:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.333 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:20:38.333 00:20:38.333 --- 10.0.0.2 ping statistics --- 00:20:38.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.333 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:20:38.333 00:20:38.333 --- 10.0.0.1 ping statistics --- 00:20:38.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.333 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.333 13:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=671755 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 671755 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 671755 ']' 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.333 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 [2024-11-06 13:45:01.146052] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:20:38.333 [2024-11-06 13:45:01.146113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.333 [2024-11-06 13:45:01.244197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.333 [2024-11-06 13:45:01.294552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.333 [2024-11-06 13:45:01.294604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.333 [2024-11-06 13:45:01.294612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.333 [2024-11-06 13:45:01.294620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.333 [2024-11-06 13:45:01.294626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
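The `nvmf_tcp_init` sequence above (nvmf/common.sh@250-287) isolates the target port in a private network namespace and opens TCP 4420 toward it. A sketch that only assembles the command strings, since the real commands need root; interface, namespace, and address names are taken from the trace:

```python
def nvmf_tcp_init_cmds(target_if: str = "cvl_0_0",
                       initiator_if: str = "cvl_0_1",
                       ns: str = "cvl_0_0_ns_spdk",
                       target_ip: str = "10.0.0.2",
                       initiator_ip: str = "10.0.0.1",
                       port: int = 4420) -> list:
    """Root-only command sequence mirroring nvmf_tcp_init as traced above:
    move the target NIC into a namespace, address both ends, bring links
    up, and accept inbound NVMe/TCP on the initiator side."""
    return [
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
    ]
```

The two `ping -c 1` runs that follow in the log verify this topology in both directions (host to namespace and namespace to host) before the target application is started under `ip netns exec`.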
00:20:38.333 [2024-11-06 13:45:01.295421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.594 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.594 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:38.594 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.594 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.594 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.c66 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.c66 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.c66 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.c66 00:20:38.855 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:38.855 [2024-11-06 13:45:02.154885] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.855 [2024-11-06 13:45:02.170878] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.855 [2024-11-06 13:45:02.171222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.855 malloc0 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=671924 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 671924 /var/tmp/bdevperf.sock 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 671924 ']' 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.116 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.116 [2024-11-06 13:45:02.311799] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
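fips/fips.sh@138-140 above materializes the TLS PSK with `mktemp -t spdk-psk.XXX`, `echo -n`, and `chmod 0600` before handing the path to `keyring_file_add_key`. An equivalent sketch (the key string below is illustrative only, not the one from the log):

```python
import os
import stat
import tempfile

def write_psk(key: str) -> str:
    """Write a TLS PSK to a private temp file (mode 0600), mirroring the
    mktemp + chmod 0600 steps traced above. Returns the file path."""
    fd, path = tempfile.mkstemp(prefix="spdk-psk.")
    try:
        os.write(fd, key.encode())  # no trailing newline, like echo -n
    finally:
        os.close(fd)
    os.chmod(path, 0o600)  # keyring consumers typically reject looser modes
    return path
```

The 0600 mode matters because the key file is later passed to the bdevperf RPC socket as a keyring entry; a world-readable PSK would defeat the point of the TLS test.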
00:20:39.116 [2024-11-06 13:45:02.311885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671924 ] 00:20:39.116 [2024-11-06 13:45:02.376020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.116 [2024-11-06 13:45:02.414058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.059 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.059 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:40.059 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.c66 00:20:40.059 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.059 [2024-11-06 13:45:03.413918] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.320 TLSTESTn1 00:20:40.320 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.320 Running I/O for 10 seconds... 
00:20:42.643 5251.00 IOPS, 20.51 MiB/s [2024-11-06T12:45:06.958Z] 5179.00 IOPS, 20.23 MiB/s [2024-11-06T12:45:07.896Z] 5325.67 IOPS, 20.80 MiB/s [2024-11-06T12:45:08.834Z] 5557.50 IOPS, 21.71 MiB/s [2024-11-06T12:45:09.774Z] 5432.40 IOPS, 21.22 MiB/s [2024-11-06T12:45:10.713Z] 5218.50 IOPS, 20.38 MiB/s [2024-11-06T12:45:11.652Z] 5122.86 IOPS, 20.01 MiB/s [2024-11-06T12:45:13.033Z] 5130.75 IOPS, 20.04 MiB/s [2024-11-06T12:45:13.976Z] 5143.00 IOPS, 20.09 MiB/s [2024-11-06T12:45:13.976Z] 5005.70 IOPS, 19.55 MiB/s 00:20:50.600 Latency(us) 00:20:50.600 [2024-11-06T12:45:13.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.600 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.600 Verification LBA range: start 0x0 length 0x2000 00:20:50.600 TLSTESTn1 : 10.02 5010.72 19.57 0.00 0.00 25513.36 5597.87 75147.95 00:20:50.600 [2024-11-06T12:45:13.976Z] =================================================================================================================== 00:20:50.600 [2024-11-06T12:45:13.976Z] Total : 5010.72 19.57 0.00 0.00 25513.36 5597.87 75147.95 00:20:50.600 { 00:20:50.600 "results": [ 00:20:50.600 { 00:20:50.600 "job": "TLSTESTn1", 00:20:50.600 "core_mask": "0x4", 00:20:50.600 "workload": "verify", 00:20:50.600 "status": "finished", 00:20:50.600 "verify_range": { 00:20:50.600 "start": 0, 00:20:50.600 "length": 8192 00:20:50.600 }, 00:20:50.600 "queue_depth": 128, 00:20:50.600 "io_size": 4096, 00:20:50.600 "runtime": 10.01553, 00:20:50.600 "iops": 5010.7183543956235, 00:20:50.600 "mibps": 19.573118571857904, 00:20:50.600 "io_failed": 0, 00:20:50.600 "io_timeout": 0, 00:20:50.600 "avg_latency_us": 25513.364111188603, 00:20:50.600 "min_latency_us": 5597.866666666667, 00:20:50.600 "max_latency_us": 75147.94666666667 00:20:50.600 } 00:20:50.600 ], 00:20:50.600 "core_count": 1 00:20:50.600 } 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:50.600 
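As a sanity check on the bdevperf summary above, the reported throughput follows directly from IOPS times IO size; a minimal sketch using the values copied verbatim from the JSON results block:

```python
# Verify the bdevperf-reported MiB/s against its own IOPS figure.
# iops and the expected mibps are taken directly from the results JSON;
# io_size matches the -o 4096 argument on the bdevperf command line.
iops = 5010.7183543956235
io_size = 4096                         # bytes per IO
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))                 # → 19.57, matching the summary table
```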
13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:50.600 nvmf_trace.0 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 671924 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 671924 ']' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 671924 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 671924 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 671924' 00:20:50.600 killing process with pid 671924 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 671924 00:20:50.600 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.600 00:20:50.600 Latency(us) 00:20:50.600 [2024-11-06T12:45:13.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.600 [2024-11-06T12:45:13.976Z] =================================================================================================================== 00:20:50.600 [2024-11-06T12:45:13.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 671924 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.600 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.600 rmmod nvme_tcp 00:20:50.600 rmmod nvme_fabrics 00:20:50.600 rmmod nvme_keyring 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.861 13:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 671755 ']' 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 671755 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 671755 ']' 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 671755 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.861 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 671755 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 671755' 00:20:50.861 killing process with pid 671755 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 671755 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 671755 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.861 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.c66 00:20:53.406 00:20:53.406 real 0m23.165s 00:20:53.406 user 0m24.176s 00:20:53.406 sys 0m10.173s 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.406 ************************************ 00:20:53.406 END TEST nvmf_fips 00:20:53.406 ************************************ 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.406 ************************************ 00:20:53.406 START TEST nvmf_control_msg_list 00:20:53.406 ************************************ 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.406 * Looking for test storage... 00:20:53.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.406 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.406 --rc genhtml_branch_coverage=1 00:20:53.406 --rc genhtml_function_coverage=1 00:20:53.406 --rc genhtml_legend=1 00:20:53.406 --rc geninfo_all_blocks=1 00:20:53.406 --rc geninfo_unexecuted_blocks=1 00:20:53.407 00:20:53.407 ' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.407 --rc genhtml_branch_coverage=1 00:20:53.407 --rc genhtml_function_coverage=1 00:20:53.407 --rc genhtml_legend=1 00:20:53.407 --rc geninfo_all_blocks=1 00:20:53.407 --rc geninfo_unexecuted_blocks=1 00:20:53.407 00:20:53.407 ' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:53.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.407 --rc genhtml_branch_coverage=1 00:20:53.407 --rc genhtml_function_coverage=1 00:20:53.407 --rc genhtml_legend=1 00:20:53.407 --rc geninfo_all_blocks=1 00:20:53.407 --rc geninfo_unexecuted_blocks=1 00:20:53.407 00:20:53.407 ' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:53.407 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.407 --rc genhtml_branch_coverage=1 00:20:53.407 --rc genhtml_function_coverage=1 00:20:53.407 --rc genhtml_legend=1 00:20:53.407 --rc geninfo_all_blocks=1 00:20:53.407 --rc geninfo_unexecuted_blocks=1 00:20:53.407 00:20:53.407 ' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.407 13:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.407 13:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.407 13:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.407 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.408 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.576 13:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:01.576 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:01.576 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.576 13:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:01.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.576 13:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.576 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:01.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.577 13:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:21:01.577 00:21:01.577 --- 10.0.0.2 ping statistics --- 00:21:01.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.577 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:21:01.577 00:21:01.577 --- 10.0.0.1 ping statistics --- 00:21:01.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.577 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=678906 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 678906 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 678906 ']' 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.577 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 [2024-11-06 13:45:23.917052] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:21:01.577 [2024-11-06 13:45:23.917124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.577 [2024-11-06 13:45:24.000442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.577 [2024-11-06 13:45:24.041095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.577 [2024-11-06 13:45:24.041132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.577 [2024-11-06 13:45:24.041140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.577 [2024-11-06 13:45:24.041147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.577 [2024-11-06 13:45:24.041153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.577 [2024-11-06 13:45:24.041745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 [2024-11-06 13:45:24.759533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 Malloc0 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.577 [2024-11-06 13:45:24.810458] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.577 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=679075 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=679076 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=679077 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 679075 00:21:01.578 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.578 [2024-11-06 13:45:24.880856] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:01.578 [2024-11-06 13:45:24.911025] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.578 [2024-11-06 13:45:24.911311] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.963 Initializing NVMe Controllers 00:21:02.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:02.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:02.963 Initialization complete. Launching workers. 00:21:02.963 ======================================================== 00:21:02.963 Latency(us) 00:21:02.963 Device Information : IOPS MiB/s Average min max 00:21:02.963 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40926.68 40854.48 41503.27 00:21:02.963 ======================================================== 00:21:02.963 Total : 25.00 0.10 40926.68 40854.48 41503.27 00:21:02.963 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 679076 00:21:02.963 Initializing NVMe Controllers 00:21:02.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:02.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:02.963 Initialization complete. Launching workers. 
00:21:02.963 ======================================================== 00:21:02.963 Latency(us) 00:21:02.963 Device Information : IOPS MiB/s Average min max 00:21:02.963 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40922.62 40811.02 41437.38 00:21:02.963 ======================================================== 00:21:02.963 Total : 25.00 0.10 40922.62 40811.02 41437.38 00:21:02.963 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 679077 00:21:02.963 Initializing NVMe Controllers 00:21:02.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:02.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:02.963 Initialization complete. Launching workers. 00:21:02.963 ======================================================== 00:21:02.963 Latency(us) 00:21:02.963 Device Information : IOPS MiB/s Average min max 00:21:02.963 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 26.00 0.10 39344.06 406.27 41244.57 00:21:02.963 ======================================================== 00:21:02.963 Total : 26.00 0.10 39344.06 406.27 41244.57 00:21:02.963 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:02.963 13:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.963 rmmod nvme_tcp 00:21:02.963 rmmod nvme_fabrics 00:21:02.963 rmmod nvme_keyring 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 678906 ']' 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 678906 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 678906 ']' 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 678906 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:02.963 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 678906 00:21:02.964 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:02.964 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:02.964 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 678906' 00:21:02.964 killing process with pid 678906 00:21:02.964 13:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 678906 00:21:02.964 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 678906 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.224 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.133 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.133 00:21:05.133 real 0m12.165s 00:21:05.133 user 0m8.093s 00:21:05.133 sys 0m6.323s 00:21:05.133 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:21:05.133 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:05.133 ************************************ 00:21:05.133 END TEST nvmf_control_msg_list 00:21:05.133 ************************************ 00:21:05.392 13:45:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:05.392 13:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:05.392 13:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:05.392 13:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.392 ************************************ 00:21:05.392 START TEST nvmf_wait_for_buf 00:21:05.392 ************************************ 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:05.393 * Looking for test storage... 
00:21:05.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.393 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:21:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.654 --rc genhtml_branch_coverage=1 00:21:05.654 --rc genhtml_function_coverage=1 00:21:05.654 --rc genhtml_legend=1 00:21:05.654 --rc geninfo_all_blocks=1 00:21:05.654 --rc geninfo_unexecuted_blocks=1 00:21:05.654 00:21:05.654 ' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.654 --rc genhtml_branch_coverage=1 00:21:05.654 --rc genhtml_function_coverage=1 00:21:05.654 --rc genhtml_legend=1 00:21:05.654 --rc geninfo_all_blocks=1 00:21:05.654 --rc geninfo_unexecuted_blocks=1 00:21:05.654 00:21:05.654 ' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.654 --rc genhtml_branch_coverage=1 00:21:05.654 --rc genhtml_function_coverage=1 00:21:05.654 --rc genhtml_legend=1 00:21:05.654 --rc geninfo_all_blocks=1 00:21:05.654 --rc geninfo_unexecuted_blocks=1 00:21:05.654 00:21:05.654 ' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:05.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.654 --rc genhtml_branch_coverage=1 00:21:05.654 --rc genhtml_function_coverage=1 00:21:05.654 --rc genhtml_legend=1 00:21:05.654 --rc geninfo_all_blocks=1 00:21:05.654 --rc geninfo_unexecuted_blocks=1 00:21:05.654 00:21:05.654 ' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.654 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.655 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.798 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.799 13:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.799 13:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.799 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.799 13:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:21:13.799 00:21:13.799 --- 10.0.0.2 ping statistics --- 00:21:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.799 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:21:13.799 00:21:13.799 --- 10.0.0.1 ping statistics --- 00:21:13.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.799 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=683547 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 683547 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 683547 ']' 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:13.799 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.799 [2024-11-06 13:45:36.308161] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:21:13.799 [2024-11-06 13:45:36.308226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.799 [2024-11-06 13:45:36.391458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.799 [2024-11-06 13:45:36.432120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.799 [2024-11-06 13:45:36.432155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:13.799 [2024-11-06 13:45:36.432163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.799 [2024-11-06 13:45:36.432169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.799 [2024-11-06 13:45:36.432175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.799 [2024-11-06 13:45:36.432775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.799 
13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.799 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 Malloc0 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.060 [2024-11-06 13:45:37.235757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 [2024-11-06 13:45:37.271978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:14.060 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.060 [2024-11-06 13:45:37.378828] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:15.443 Initializing NVMe Controllers 00:21:15.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:15.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:15.443 Initialization complete. Launching workers. 00:21:15.443 ======================================================== 00:21:15.443 Latency(us) 00:21:15.443 Device Information : IOPS MiB/s Average min max 00:21:15.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32241.39 7999.70 62862.50 00:21:15.443 ======================================================== 00:21:15.443 Total : 129.00 16.12 32241.39 7999.70 62862.50 00:21:15.443 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.443 13:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.443 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.443 rmmod nvme_tcp 00:21:15.703 rmmod nvme_fabrics 00:21:15.703 rmmod nvme_keyring 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 683547 ']' 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 683547 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 683547 ']' 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 683547 
00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 683547 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:15.703 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 683547' 00:21:15.703 killing process with pid 683547 00:21:15.704 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 683547 00:21:15.704 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 683547 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.704 13:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.704 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.248 00:21:18.248 real 0m12.571s 00:21:18.248 user 0m5.022s 00:21:18.248 sys 0m6.101s 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.248 ************************************ 00:21:18.248 END TEST nvmf_wait_for_buf 00:21:18.248 ************************************ 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.248 13:45:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.833 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.833 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.833 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.834 
13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:24.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.834 13:45:48 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:24.834 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:24.834 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:24.834 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.834 ************************************ 00:21:24.834 START TEST nvmf_perf_adq 00:21:24.834 ************************************ 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.834 * Looking for test storage... 00:21:24.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:24.834 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.095 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:25.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.095 --rc genhtml_branch_coverage=1 00:21:25.095 --rc genhtml_function_coverage=1 00:21:25.095 --rc genhtml_legend=1 00:21:25.095 --rc geninfo_all_blocks=1 00:21:25.095 --rc geninfo_unexecuted_blocks=1 00:21:25.095 00:21:25.096 ' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.096 --rc genhtml_branch_coverage=1 00:21:25.096 --rc genhtml_function_coverage=1 00:21:25.096 --rc genhtml_legend=1 00:21:25.096 --rc geninfo_all_blocks=1 00:21:25.096 --rc geninfo_unexecuted_blocks=1 00:21:25.096 00:21:25.096 ' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.096 --rc genhtml_branch_coverage=1 00:21:25.096 --rc genhtml_function_coverage=1 00:21:25.096 --rc genhtml_legend=1 00:21:25.096 --rc geninfo_all_blocks=1 00:21:25.096 --rc geninfo_unexecuted_blocks=1 00:21:25.096 00:21:25.096 ' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:25.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.096 --rc genhtml_branch_coverage=1 00:21:25.096 --rc genhtml_function_coverage=1 00:21:25.096 --rc genhtml_legend=1 00:21:25.096 --rc geninfo_all_blocks=1 00:21:25.096 --rc geninfo_unexecuted_blocks=1 00:21:25.096 00:21:25.096 ' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.096 13:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.096 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.234 13:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:33.234 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:33.234 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:33.234 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:33.234 13:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:33.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:33.234 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:33.495 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:35.407 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.692 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.692 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.692 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:40.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:21:40.692 00:21:40.692 --- 10.0.0.2 ping statistics --- 00:21:40.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.692 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:21:40.692 00:21:40.692 --- 10.0.0.1 ping statistics --- 00:21:40.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.692 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.692 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.693 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=693656 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 693656 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 693656 ']' 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:40.693 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.953 [2024-11-06 13:46:04.093726] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:21:40.953 [2024-11-06 13:46:04.093803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.953 [2024-11-06 13:46:04.176310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.953 [2024-11-06 13:46:04.219995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.953 [2024-11-06 13:46:04.220032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.953 [2024-11-06 13:46:04.220040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.953 [2024-11-06 13:46:04.220047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.953 [2024-11-06 13:46:04.220053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.953 [2024-11-06 13:46:04.221619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.953 [2024-11-06 13:46:04.221723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.953 [2024-11-06 13:46:04.221866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.953 [2024-11-06 13:46:04.222042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.523 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:41.523 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:41.523 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.523 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.523 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:41.783 13:46:04 
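nvmf_tgt is launched above with `-m 0xF`, and the reactor notices confirm one reactor thread per set bit in the mask (cores 0 through 3). A small pure-bash sketch of how such a hex core mask expands to a core list (no SPDK involved; the 16-bit scan width is an arbitrary assumption):

```shell
# Expand a core mask the way the reactor messages above reflect it:
# bit i set in the mask -> a reactor thread pinned to core i.
mask=0xF
cores=()
for ((i = 0; i < 16; i++)); do
    (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "reactors on cores: ${cores[*]}"   # reactors on cores: 0 1 2 3
```

With `-m 0xF0`, as passed to spdk_nvme_perf later in this run, the same expansion yields cores 4-7, which matches the "from core 4..7" lines in the perf summary.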
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 [2024-11-06 13:46:05.067827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 Malloc1 00:21:41.783 13:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.783 [2024-11-06 13:46:05.137081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=694011 00:21:41.783 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:41.783 13:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:44.321 "tick_rate": 2400000000, 00:21:44.321 "poll_groups": [ 00:21:44.321 { 00:21:44.321 "name": "nvmf_tgt_poll_group_000", 00:21:44.321 "admin_qpairs": 1, 00:21:44.321 "io_qpairs": 1, 00:21:44.321 "current_admin_qpairs": 1, 00:21:44.321 "current_io_qpairs": 1, 00:21:44.321 "pending_bdev_io": 0, 00:21:44.321 "completed_nvme_io": 19133, 00:21:44.321 "transports": [ 00:21:44.321 { 00:21:44.321 "trtype": "TCP" 00:21:44.321 } 00:21:44.321 ] 00:21:44.321 }, 00:21:44.321 { 00:21:44.321 "name": "nvmf_tgt_poll_group_001", 00:21:44.321 "admin_qpairs": 0, 00:21:44.321 "io_qpairs": 1, 00:21:44.321 "current_admin_qpairs": 0, 00:21:44.321 "current_io_qpairs": 1, 00:21:44.321 "pending_bdev_io": 0, 00:21:44.321 "completed_nvme_io": 27544, 00:21:44.321 "transports": [ 00:21:44.321 { 00:21:44.321 "trtype": "TCP" 00:21:44.321 } 00:21:44.321 ] 00:21:44.321 }, 00:21:44.321 { 00:21:44.321 "name": "nvmf_tgt_poll_group_002", 00:21:44.321 "admin_qpairs": 0, 00:21:44.321 "io_qpairs": 1, 00:21:44.321 "current_admin_qpairs": 0, 00:21:44.321 "current_io_qpairs": 1, 00:21:44.321 "pending_bdev_io": 0, 00:21:44.321 "completed_nvme_io": 21956, 00:21:44.321 
"transports": [ 00:21:44.321 { 00:21:44.321 "trtype": "TCP" 00:21:44.321 } 00:21:44.321 ] 00:21:44.321 }, 00:21:44.321 { 00:21:44.321 "name": "nvmf_tgt_poll_group_003", 00:21:44.321 "admin_qpairs": 0, 00:21:44.321 "io_qpairs": 1, 00:21:44.321 "current_admin_qpairs": 0, 00:21:44.321 "current_io_qpairs": 1, 00:21:44.321 "pending_bdev_io": 0, 00:21:44.321 "completed_nvme_io": 19740, 00:21:44.321 "transports": [ 00:21:44.321 { 00:21:44.321 "trtype": "TCP" 00:21:44.321 } 00:21:44.321 ] 00:21:44.321 } 00:21:44.321 ] 00:21:44.321 }' 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:44.321 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 694011 00:21:52.456 Initializing NVMe Controllers 00:21:52.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:52.456 Initialization complete. Launching workers. 
00:21:52.456 ======================================================== 00:21:52.456 Latency(us) 00:21:52.456 Device Information : IOPS MiB/s Average min max 00:21:52.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11178.67 43.67 5725.81 1463.03 9024.81 00:21:52.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14401.46 56.26 4444.17 1100.69 10095.43 00:21:52.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14533.56 56.77 4403.59 1199.97 10876.86 00:21:52.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13370.97 52.23 4786.25 1369.34 10365.49 00:21:52.456 ======================================================== 00:21:52.456 Total : 53484.66 208.92 4786.53 1100.69 10876.86 00:21:52.456 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.456 rmmod nvme_tcp 00:21:52.456 rmmod nvme_fabrics 00:21:52.456 rmmod nvme_keyring 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:52.456 13:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 693656 ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 693656 ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 693656' 00:21:52.456 killing process with pid 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 693656 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:52.456 13:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.456 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.500 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.500 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:54.500 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:54.500 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:55.909 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:57.820 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.111 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.112 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:03.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:03.112 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:03.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:03.112 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:03.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.112 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:22:03.113 00:22:03.113 --- 10.0.0.2 ping statistics --- 00:22:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.113 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:22:03.113 00:22:03.113 --- 10.0.0.1 ping statistics --- 00:22:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.113 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:03.113 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:03.374 net.core.busy_poll = 1 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:03.374 net.core.busy_read = 1 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=698481 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 698481 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 698481 ']' 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:03.374 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.635 [2024-11-06 13:46:26.789439] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:03.636 [2024-11-06 13:46:26.789495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.636 [2024-11-06 13:46:26.873642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.636 [2024-11-06 13:46:26.911721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.636 [2024-11-06 13:46:26.911762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.636 [2024-11-06 13:46:26.911770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.636 [2024-11-06 13:46:26.911777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:03.636 [2024-11-06 13:46:26.911786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.636 [2024-11-06 13:46:26.913304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.636 [2024-11-06 13:46:26.913416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.636 [2024-11-06 13:46:26.913571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.636 [2024-11-06 13:46:26.913573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 [2024-11-06 13:46:27.763515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 Malloc1 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.579 [2024-11-06 13:46:27.833133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=698835 
00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:04.579 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.494 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:06.494 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.494 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.494 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.494 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:06.494 "tick_rate": 2400000000, 00:22:06.494 "poll_groups": [ 00:22:06.494 { 00:22:06.494 "name": "nvmf_tgt_poll_group_000", 00:22:06.494 "admin_qpairs": 1, 00:22:06.494 "io_qpairs": 2, 00:22:06.494 "current_admin_qpairs": 1, 00:22:06.494 "current_io_qpairs": 2, 00:22:06.494 "pending_bdev_io": 0, 00:22:06.494 "completed_nvme_io": 30427, 00:22:06.494 "transports": [ 00:22:06.494 { 00:22:06.494 "trtype": "TCP" 00:22:06.494 } 00:22:06.494 ] 00:22:06.494 }, 00:22:06.494 { 00:22:06.494 "name": "nvmf_tgt_poll_group_001", 00:22:06.494 "admin_qpairs": 0, 00:22:06.494 "io_qpairs": 2, 00:22:06.494 "current_admin_qpairs": 0, 00:22:06.494 "current_io_qpairs": 2, 00:22:06.494 "pending_bdev_io": 0, 00:22:06.494 "completed_nvme_io": 39718, 00:22:06.494 "transports": [ 00:22:06.494 { 00:22:06.494 "trtype": "TCP" 00:22:06.494 } 00:22:06.494 ] 00:22:06.494 }, 00:22:06.494 { 00:22:06.494 "name": "nvmf_tgt_poll_group_002", 00:22:06.494 "admin_qpairs": 0, 00:22:06.494 "io_qpairs": 0, 00:22:06.494 "current_admin_qpairs": 0, 
00:22:06.494 "current_io_qpairs": 0, 00:22:06.494 "pending_bdev_io": 0, 00:22:06.494 "completed_nvme_io": 0, 00:22:06.494 "transports": [ 00:22:06.494 { 00:22:06.494 "trtype": "TCP" 00:22:06.494 } 00:22:06.494 ] 00:22:06.494 }, 00:22:06.494 { 00:22:06.494 "name": "nvmf_tgt_poll_group_003", 00:22:06.494 "admin_qpairs": 0, 00:22:06.494 "io_qpairs": 0, 00:22:06.494 "current_admin_qpairs": 0, 00:22:06.494 "current_io_qpairs": 0, 00:22:06.494 "pending_bdev_io": 0, 00:22:06.494 "completed_nvme_io": 0, 00:22:06.494 "transports": [ 00:22:06.494 { 00:22:06.494 "trtype": "TCP" 00:22:06.494 } 00:22:06.494 ] 00:22:06.494 } 00:22:06.494 ] 00:22:06.494 }' 00:22:06.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:06.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:06.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:06.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:06.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 698835 00:22:14.895 Initializing NVMe Controllers 00:22:14.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:14.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:14.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:14.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:14.895 Initialization complete. Launching workers. 
00:22:14.895 ======================================================== 00:22:14.895 Latency(us) 00:22:14.895 Device Information : IOPS MiB/s Average min max 00:22:14.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12124.70 47.36 5279.89 1133.94 49418.53 00:22:14.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8586.80 33.54 7453.34 1355.87 49997.16 00:22:14.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11788.80 46.05 5440.60 1107.41 50319.29 00:22:14.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7767.90 30.34 8267.65 1270.40 52154.24 00:22:14.895 ======================================================== 00:22:14.895 Total : 40268.20 157.30 6366.76 1107.41 52154.24 00:22:14.895 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.895 rmmod nvme_tcp 00:22:14.895 rmmod nvme_fabrics 00:22:14.895 rmmod nvme_keyring 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:14.895 13:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 698481 ']' 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 698481 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 698481 ']' 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 698481 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 698481 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 698481' 00:22:14.895 killing process with pid 698481 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 698481 00:22:14.895 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 698481 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:15.157 13:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.157 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:18.458 00:22:18.458 real 0m53.343s 00:22:18.458 user 2m49.866s 00:22:18.458 sys 0m11.268s 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.458 ************************************ 00:22:18.458 END TEST nvmf_perf_adq 00:22:18.458 ************************************ 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.458 ************************************ 00:22:18.458 START TEST nvmf_shutdown 00:22:18.458 ************************************ 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:18.458 * Looking for test storage... 00:22:18.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.458 13:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:18.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.458 --rc genhtml_branch_coverage=1 00:22:18.458 --rc genhtml_function_coverage=1 00:22:18.458 --rc genhtml_legend=1 00:22:18.458 --rc geninfo_all_blocks=1 00:22:18.458 --rc geninfo_unexecuted_blocks=1 00:22:18.458 00:22:18.458 ' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:18.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.458 --rc genhtml_branch_coverage=1 00:22:18.458 --rc genhtml_function_coverage=1 00:22:18.458 --rc genhtml_legend=1 00:22:18.458 --rc geninfo_all_blocks=1 00:22:18.458 --rc geninfo_unexecuted_blocks=1 00:22:18.458 00:22:18.458 ' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:18.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.458 --rc genhtml_branch_coverage=1 00:22:18.458 --rc genhtml_function_coverage=1 00:22:18.458 --rc genhtml_legend=1 00:22:18.458 --rc geninfo_all_blocks=1 00:22:18.458 --rc geninfo_unexecuted_blocks=1 00:22:18.458 00:22:18.458 ' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:18.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.458 --rc genhtml_branch_coverage=1 00:22:18.458 --rc genhtml_function_coverage=1 00:22:18.458 --rc genhtml_legend=1 00:22:18.458 --rc geninfo_all_blocks=1 00:22:18.458 --rc geninfo_unexecuted_blocks=1 00:22:18.458 00:22:18.458 ' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.458 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:18.459 ************************************ 00:22:18.459 START TEST nvmf_shutdown_tc1 00:22:18.459 ************************************ 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:18.459 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:26.597 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.597 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.597 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:26.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.598 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:26.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:26.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:26.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.598 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.598 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:22:26.598 00:22:26.598 --- 10.0.0.2 ping statistics --- 00:22:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.598 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:26.598 00:22:26.598 --- 10.0.0.1 ping statistics --- 00:22:26.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.598 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=705313 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 705313 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 705313 ']' 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.598 13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.598 [2024-11-06 13:46:49.215709] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:26.598 [2024-11-06 13:46:49.215788] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.598 [2024-11-06 13:46:49.315237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.598 [2024-11-06 13:46:49.367343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.598 [2024-11-06 13:46:49.367394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.598 [2024-11-06 13:46:49.367403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.598 [2024-11-06 13:46:49.367410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.598 [2024-11-06 13:46:49.367416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.598 [2024-11-06 13:46:49.369768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.598 [2024-11-06 13:46:49.369987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.598 [2024-11-06 13:46:49.370151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.598 [2024-11-06 13:46:49.370150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.859 [2024-11-06 13:46:50.073739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.859 13:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.859 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.859 Malloc1 00:22:26.859 [2024-11-06 13:46:50.198507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.859 Malloc2 00:22:27.120 Malloc3 00:22:27.120 Malloc4 00:22:27.120 Malloc5 00:22:27.120 Malloc6 00:22:27.120 Malloc7 00:22:27.120 Malloc8 00:22:27.381 Malloc9 
00:22:27.381 Malloc10 00:22:27.381 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.381 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=705624 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 705624 /var/tmp/bdevperf.sock 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 705624 ']' 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": 
${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 [2024-11-06 13:46:50.652028] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:27.382 [2024-11-06 13:46:50.652083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": 
"$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.382 "hdgst": ${hdgst:-false}, 00:22:27.382 "ddgst": ${ddgst:-false} 00:22:27.382 }, 00:22:27.382 "method": "bdev_nvme_attach_controller" 00:22:27.382 } 00:22:27.382 EOF 00:22:27.382 )") 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.382 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.382 { 00:22:27.382 "params": { 00:22:27.382 "name": "Nvme$subsystem", 00:22:27.382 "trtype": "$TEST_TRANSPORT", 00:22:27.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.382 "adrfam": "ipv4", 00:22:27.382 "trsvcid": "$NVMF_PORT", 00:22:27.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.383 "hdgst": ${hdgst:-false}, 00:22:27.383 "ddgst": ${ddgst:-false} 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 } 00:22:27.383 EOF 00:22:27.383 )") 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.383 { 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme$subsystem", 00:22:27.383 "trtype": "$TEST_TRANSPORT", 00:22:27.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": 
"$NVMF_PORT", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.383 "hdgst": ${hdgst:-false}, 00:22:27.383 "ddgst": ${ddgst:-false} 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 } 00:22:27.383 EOF 00:22:27.383 )") 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:27.383 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme1", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme2", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme3", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.383 "hdgst": false, 00:22:27.383 
"ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme4", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme5", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme6", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme7", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme8", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 
"trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme9", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 },{ 00:22:27.383 "params": { 00:22:27.383 "name": "Nvme10", 00:22:27.383 "trtype": "tcp", 00:22:27.383 "traddr": "10.0.0.2", 00:22:27.383 "adrfam": "ipv4", 00:22:27.383 "trsvcid": "4420", 00:22:27.383 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.383 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.383 "hdgst": false, 00:22:27.383 "ddgst": false 00:22:27.383 }, 00:22:27.383 "method": "bdev_nvme_attach_controller" 00:22:27.383 }' 00:22:27.383 [2024-11-06 13:46:50.724669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.643 [2024-11-06 13:46:50.761125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 705624 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:29.027 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:29.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 705624 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 705313 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.967 { 00:22:29.967 "params": { 00:22:29.967 "name": "Nvme$subsystem", 00:22:29.967 "trtype": "$TEST_TRANSPORT", 00:22:29.967 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:29.967 "adrfam": "ipv4", 00:22:29.967 "trsvcid": "$NVMF_PORT", 00:22:29.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.967 "hdgst": ${hdgst:-false}, 00:22:29.967 "ddgst": ${ddgst:-false} 00:22:29.967 }, 00:22:29.967 "method": "bdev_nvme_attach_controller" 00:22:29.967 } 00:22:29.967 EOF 00:22:29.967 )") 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.967 { 00:22:29.967 "params": { 00:22:29.967 "name": "Nvme$subsystem", 00:22:29.967 "trtype": "$TEST_TRANSPORT", 00:22:29.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.967 "adrfam": "ipv4", 00:22:29.967 "trsvcid": "$NVMF_PORT", 00:22:29.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.967 "hdgst": ${hdgst:-false}, 00:22:29.967 "ddgst": ${ddgst:-false} 00:22:29.967 }, 00:22:29.967 "method": "bdev_nvme_attach_controller" 00:22:29.967 } 00:22:29.967 EOF 00:22:29.967 )") 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.967 { 00:22:29.967 "params": { 00:22:29.967 "name": "Nvme$subsystem", 00:22:29.967 "trtype": "$TEST_TRANSPORT", 00:22:29.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.967 "adrfam": "ipv4", 00:22:29.967 "trsvcid": "$NVMF_PORT", 00:22:29.967 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.967 "hdgst": ${hdgst:-false}, 00:22:29.967 "ddgst": ${ddgst:-false} 00:22:29.967 }, 00:22:29.967 "method": "bdev_nvme_attach_controller" 00:22:29.967 } 00:22:29.967 EOF 00:22:29.967 )") 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.967 { 00:22:29.967 "params": { 00:22:29.967 "name": "Nvme$subsystem", 00:22:29.967 "trtype": "$TEST_TRANSPORT", 00:22:29.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.967 "adrfam": "ipv4", 00:22:29.967 "trsvcid": "$NVMF_PORT", 00:22:29.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.967 "hdgst": ${hdgst:-false}, 00:22:29.967 "ddgst": ${ddgst:-false} 00:22:29.967 }, 00:22:29.967 "method": "bdev_nvme_attach_controller" 00:22:29.967 } 00:22:29.967 EOF 00:22:29.967 )") 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.967 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.967 { 00:22:29.967 "params": { 00:22:29.967 "name": "Nvme$subsystem", 00:22:29.967 "trtype": "$TEST_TRANSPORT", 00:22:29.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.967 "adrfam": "ipv4", 00:22:29.967 "trsvcid": "$NVMF_PORT", 00:22:29.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.967 "hdgst": 
${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.968 { 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme$subsystem", 00:22:29.968 "trtype": "$TEST_TRANSPORT", 00:22:29.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "$NVMF_PORT", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.968 "hdgst": ${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 [2024-11-06 13:46:53.287438] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:22:29.968 [2024-11-06 13:46:53.287492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706067 ] 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.968 { 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme$subsystem", 00:22:29.968 "trtype": "$TEST_TRANSPORT", 00:22:29.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "$NVMF_PORT", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.968 "hdgst": ${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.968 { 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme$subsystem", 00:22:29.968 "trtype": "$TEST_TRANSPORT", 00:22:29.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "$NVMF_PORT", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.968 "hdgst": ${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": 
"bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.968 { 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme$subsystem", 00:22:29.968 "trtype": "$TEST_TRANSPORT", 00:22:29.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "$NVMF_PORT", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.968 "hdgst": ${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.968 { 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme$subsystem", 00:22:29.968 "trtype": "$TEST_TRANSPORT", 00:22:29.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "$NVMF_PORT", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.968 "hdgst": ${hdgst:-false}, 00:22:29.968 "ddgst": ${ddgst:-false} 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 } 00:22:29.968 EOF 00:22:29.968 )") 00:22:29.968 13:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:29.968 13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme1", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme2", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme3", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme4", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.968 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme5", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme6", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme7", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme8", 00:22:29.968 "trtype": "tcp", 00:22:29.968 "traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.968 "method": "bdev_nvme_attach_controller" 00:22:29.968 },{ 00:22:29.968 "params": { 00:22:29.968 "name": "Nvme9", 00:22:29.968 "trtype": "tcp", 00:22:29.968 
"traddr": "10.0.0.2", 00:22:29.968 "adrfam": "ipv4", 00:22:29.968 "trsvcid": "4420", 00:22:29.968 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.968 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.968 "hdgst": false, 00:22:29.968 "ddgst": false 00:22:29.968 }, 00:22:29.969 "method": "bdev_nvme_attach_controller" 00:22:29.969 },{ 00:22:29.969 "params": { 00:22:29.969 "name": "Nvme10", 00:22:29.969 "trtype": "tcp", 00:22:29.969 "traddr": "10.0.0.2", 00:22:29.969 "adrfam": "ipv4", 00:22:29.969 "trsvcid": "4420", 00:22:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.969 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.969 "hdgst": false, 00:22:29.969 "ddgst": false 00:22:29.969 }, 00:22:29.969 "method": "bdev_nvme_attach_controller" 00:22:29.969 }' 00:22:30.229 [2024-11-06 13:46:53.359636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.229 [2024-11-06 13:46:53.396042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.611 Running I/O for 1 seconds... 
00:22:32.812 1862.00 IOPS, 116.38 MiB/s 00:22:32.812 Latency(us) 00:22:32.812 [2024-11-06T12:46:56.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.812 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme1n1 : 1.15 221.66 13.85 0.00 0.00 284729.17 18022.40 255153.49 00:22:32.812 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme2n1 : 1.11 230.61 14.41 0.00 0.00 269784.32 21845.33 246415.36 00:22:32.812 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme3n1 : 1.10 244.59 15.29 0.00 0.00 243040.26 13762.56 262144.00 00:22:32.812 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme4n1 : 1.10 236.73 14.80 0.00 0.00 247711.59 18677.76 228939.09 00:22:32.812 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme5n1 : 1.15 221.90 13.87 0.00 0.00 265321.81 20753.07 251658.24 00:22:32.812 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme6n1 : 1.14 227.35 14.21 0.00 0.00 253781.02 5242.88 251658.24 00:22:32.812 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme7n1 : 1.18 271.40 16.96 0.00 0.00 210241.62 11851.09 256901.12 00:22:32.812 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme8n1 : 1.14 224.37 14.02 0.00 0.00 248444.37 21626.88 251658.24 
00:22:32.812 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme9n1 : 1.19 269.47 16.84 0.00 0.00 204266.15 15619.41 248162.99 00:22:32.812 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:32.812 Verification LBA range: start 0x0 length 0x400 00:22:32.812 Nvme10n1 : 1.20 266.99 16.69 0.00 0.00 202584.92 9502.72 269134.51 00:22:32.812 [2024-11-06T12:46:56.188Z] =================================================================================================================== 00:22:32.812 [2024-11-06T12:46:56.188Z] Total : 2415.05 150.94 0.00 0.00 240431.94 5242.88 269134.51 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:33.072 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.072 rmmod nvme_tcp 00:22:33.072 rmmod nvme_fabrics 00:22:33.072 rmmod nvme_keyring 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 705313 ']' 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 705313 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 705313 ']' 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 705313 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 705313 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:33.072 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 705313' 00:22:33.072 killing process with pid 705313 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 705313 00:22:33.072 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 705313 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.332 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.873 00:22:35.873 real 0m16.884s 00:22:35.873 user 0m35.133s 00:22:35.873 sys 0m6.705s 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.873 ************************************ 00:22:35.873 END TEST nvmf_shutdown_tc1 00:22:35.873 ************************************ 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:35.873 ************************************ 00:22:35.873 START TEST nvmf_shutdown_tc2 00:22:35.873 ************************************ 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.873 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.873 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.874 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.874 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.874 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:35.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:35.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.874 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:35.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.874 13:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:35.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.874 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.875 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:22:35.875 00:22:35.875 --- 10.0.0.2 ping statistics --- 00:22:35.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.875 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:22:35.875 00:22:35.875 --- 10.0.0.1 ping statistics --- 00:22:35.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.875 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.875 
13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=707389 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 707389 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 707389 ']' 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.875 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.875 [2024-11-06 13:46:59.140891] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:22:35.875 [2024-11-06 13:46:59.140945] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.875 [2024-11-06 13:46:59.228213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.135 [2024-11-06 13:46:59.259287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.135 [2024-11-06 13:46:59.259313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.135 [2024-11-06 13:46:59.259318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.135 [2024-11-06 13:46:59.259323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.135 [2024-11-06 13:46:59.259328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.135 [2024-11-06 13:46:59.260520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.135 [2024-11-06 13:46:59.260678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.135 [2024-11-06 13:46:59.260821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.135 [2024-11-06 13:46:59.260823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.706 [2024-11-06 13:46:59.992900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.706 13:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.706 13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.706 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.968 Malloc1 00:22:36.968 [2024-11-06 13:47:00.112369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.968 Malloc2 00:22:36.968 Malloc3 00:22:36.968 Malloc4 00:22:36.968 Malloc5 00:22:36.968 Malloc6 00:22:36.968 Malloc7 00:22:37.230 Malloc8 00:22:37.230 Malloc9 
00:22:37.230 Malloc10 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=707618 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 707618 /var/tmp/bdevperf.sock 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 707618 ']' 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 
00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 [2024-11-06 13:47:00.571258] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:37.231 [2024-11-06 13:47:00.571327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707618 ] 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.231 { 00:22:37.231 "params": { 00:22:37.231 "name": "Nvme$subsystem", 00:22:37.231 "trtype": "$TEST_TRANSPORT", 00:22:37.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.231 "adrfam": "ipv4", 00:22:37.231 "trsvcid": "$NVMF_PORT", 00:22:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.231 "hdgst": ${hdgst:-false}, 00:22:37.231 "ddgst": ${ddgst:-false} 00:22:37.231 }, 00:22:37.231 "method": "bdev_nvme_attach_controller" 00:22:37.231 } 00:22:37.231 EOF 00:22:37.231 )") 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.231 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:37.232 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:37.232 { 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme$subsystem", 00:22:37.232 "trtype": "$TEST_TRANSPORT", 00:22:37.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "$NVMF_PORT", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.232 "hdgst": ${hdgst:-false}, 00:22:37.232 "ddgst": ${ddgst:-false} 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 } 00:22:37.232 EOF 00:22:37.232 )") 00:22:37.232 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:37.232 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:37.232 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:37.232 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme1", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme2", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme3", 00:22:37.232 
"trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme4", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme5", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme6", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme7", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": 
false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme8", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme9", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 },{ 00:22:37.232 "params": { 00:22:37.232 "name": "Nvme10", 00:22:37.232 "trtype": "tcp", 00:22:37.232 "traddr": "10.0.0.2", 00:22:37.232 "adrfam": "ipv4", 00:22:37.232 "trsvcid": "4420", 00:22:37.232 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:37.232 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:37.232 "hdgst": false, 00:22:37.232 "ddgst": false 00:22:37.232 }, 00:22:37.232 "method": "bdev_nvme_attach_controller" 00:22:37.232 }' 00:22:37.492 [2024-11-06 13:47:00.644558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.492 [2024-11-06 13:47:00.681150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.874 Running I/O for 10 seconds... 
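The `gen_nvmf_target_json` calls traced above build one connect stanza per subsystem from a heredoc template, then join them (the log's `jq . | printf '%s\n'` step) into the `--json` config that bdevperf reads from `/dev/fd/63`. A simplified, self-contained sketch of the same pattern, with the join done by `printf` so it has no `jq` dependency (variable defaults mirror this run):

```shell
# Sketch of the per-subsystem config generation shown in the log.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT mirror nvmf/common.sh for this run.
gen_target_json() {
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  local config=() subsystem
  # "${@:-1}" defaults to a single subsystem when called with no arguments,
  # exactly as in the logged loop over "${@:-1}"
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
  "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
  done
  # Comma-join the stanzas, as the IFS=, / printf step does in the log
  local IFS=,
  printf '%s\n' "${config[*]}"
}
```

`gen_target_json 1 2 3` emits three comma-joined stanzas naming Nvme1..Nvme3; in the real script this output is fed to bdevperf via process substitution (`--json /dev/fd/63`), so the config never touches disk.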
00:22:38.874 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.874 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:38.874 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:38.874 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.874 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:39.134 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:39.394 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 707618 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 707618 ']' 
00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 707618 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.655 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 707618 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 707618' 00:22:39.915 killing process with pid 707618 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 707618 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 707618 00:22:39.915 Received shutdown signal, test time was about 0.977593 seconds 00:22:39.915 00:22:39.915 Latency(us) 00:22:39.915 [2024-11-06T12:47:03.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme1n1 : 0.95 202.33 12.65 0.00 0.00 312734.44 26978.99 258648.75 00:22:39.915 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme2n1 : 0.98 262.12 16.38 0.00 0.00 235663.89 7427.41 256901.12 00:22:39.915 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme3n1 : 0.96 265.45 16.59 0.00 0.00 228742.61 14199.47 260396.37 00:22:39.915 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme4n1 : 0.97 264.40 16.52 0.00 0.00 225064.32 17913.17 256901.12 00:22:39.915 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme5n1 : 0.97 262.67 16.42 0.00 0.00 220527.36 12888.75 251658.24 00:22:39.915 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme6n1 : 0.95 202.09 12.63 0.00 0.00 280298.38 35826.35 241172.48 00:22:39.915 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme7n1 : 0.97 263.67 16.48 0.00 0.00 211478.19 22719.15 251658.24 00:22:39.915 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme8n1 : 0.96 270.78 16.92 0.00 0.00 200067.44 5406.72 232434.35 00:22:39.915 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme9n1 : 0.95 201.69 12.61 0.00 0.00 262924.80 15073.28 251658.24 00:22:39.915 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.915 Verification LBA range: start 0x0 length 0x400 00:22:39.915 Nvme10n1 : 0.96 200.33 12.52 0.00 0.00 258920.11 18568.53 272629.76 00:22:39.915 [2024-11-06T12:47:03.291Z] =================================================================================================================== 00:22:39.915 [2024-11-06T12:47:03.291Z] 
Total : 2395.54 149.72 0.00 0.00 239675.91 5406.72 272629.76 00:22:39.915 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.299 rmmod nvme_tcp 00:22:41.299 rmmod nvme_fabrics 00:22:41.299 rmmod nvme_keyring 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 707389 ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 707389 ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 707389' 00:22:41.299 killing process with pid 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 707389 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 707389 00:22:41.299 
13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:41.299 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.300 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.300 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.300 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.300 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.300 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.845 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.845 00:22:43.845 real 0m7.977s 00:22:43.846 user 0m24.365s 00:22:43.846 sys 0m1.249s 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.846 13:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.846 ************************************ 00:22:43.846 END TEST nvmf_shutdown_tc2 00:22:43.846 ************************************ 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.846 ************************************ 00:22:43.846 START TEST nvmf_shutdown_tc3 00:22:43.846 ************************************ 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.846 13:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:43.846 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.846 13:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:43.846 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:43.846 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:43.846 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.846 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.847 13:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.847 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:22:43.847 00:22:43.847 --- 10.0.0.2 ping statistics --- 00:22:43.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.847 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:22:43.847 00:22:43.847 --- 10.0.0.1 ping statistics --- 00:22:43.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.847 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=709032 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 709032 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 709032 ']' 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.847 13:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.847 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.107 [2024-11-06 13:47:07.224628] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:44.107 [2024-11-06 13:47:07.224701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.107 [2024-11-06 13:47:07.320617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.108 [2024-11-06 13:47:07.356308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.108 [2024-11-06 13:47:07.356337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.108 [2024-11-06 13:47:07.356343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.108 [2024-11-06 13:47:07.356348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.108 [2024-11-06 13:47:07.356352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
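The iptables call traced at nvmf/common.sh@790 above shows a pattern worth noting: each firewall rule the test inserts is tagged with a comment that embeds the rule text itself ("SPDK_NVMF:<rule>"), presumably so teardown can later find and delete exactly the rules the test added. A minimal sketch of how such a command is assembled; it only builds the string (actually applying it needs root and the cvl_0_1 test interface), and the variable names are illustrative:

```shell
#!/usr/bin/env bash
# Rule-tagging pattern from nvmf/common.sh@790 above: the inserted rule
# carries a comment embedding its own text, so cleanup can match on the
# "SPDK_NVMF:" prefix. Built as a string only -- running it requires
# root and the cvl_0_1 interface from the test rig.
rule='-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
cmd="iptables $rule -m comment --comment \"SPDK_NVMF:$rule\""
echo "$cmd"
```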
00:22:44.108 [2024-11-06 13:47:07.357844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.108 [2024-11-06 13:47:07.358161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.108 [2024-11-06 13:47:07.358285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.108 [2024-11-06 13:47:07.358285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.727 [2024-11-06 13:47:08.076936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.727 13:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.727 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.988 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.988 Malloc1 00:22:44.988 [2024-11-06 13:47:08.183112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.988 Malloc2 00:22:44.988 Malloc3 00:22:44.988 Malloc4 00:22:44.988 Malloc5 00:22:44.988 Malloc6 00:22:45.248 Malloc7 00:22:45.248 Malloc8 00:22:45.248 Malloc9 
00:22:45.248 Malloc10 00:22:45.248 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.248 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:45.248 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=709422 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 709422 /var/tmp/bdevperf.sock 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 709422 ']' 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
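The create_subsystems stage traced above (shutdown.sh@28-29) loops over num_subsystems, cat-appending one block of RPC commands per subsystem to rpcs.txt, and the whole file is then replayed in a single rpc_cmd batch (the shutdown.sh@36 call); the Malloc1 through Malloc10 bdevs in the log are the result. A runnable sketch of that accumulate-then-replay loop, with illustrative RPC lines rather than the exact contents of shutdown.sh:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown.sh@28-29 loop above: queue per-subsystem RPCs
# into rpcs.txt, to be replayed later in one batch (shutdown.sh@36).
# The RPC lines are illustrative, not copied from shutdown.sh.
num_subsystems=({1..10})
rpcs=rpcs.txt
rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do
  cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
echo "queued $(wc -l <"$rpcs") RPC lines"
```

Batching the RPCs this way means the target app processes all ten subsystems in one rpc_cmd invocation instead of forty round-trips.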
00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 00:22:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.249 "adrfam": "ipv4", 00:22:45.249 "trsvcid": "$NVMF_PORT", 00:22:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.249 "hdgst": ${hdgst:-false}, 00:22:45.249 "ddgst": ${ddgst:-false} 00:22:45.249 }, 00:22:45.249 "method": "bdev_nvme_attach_controller" 00:22:45.249 } 00:22:45.249 EOF 00:22:45.249 )") 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 00:22:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.249 "adrfam": "ipv4", 00:22:45.249 "trsvcid": "$NVMF_PORT", 00:22:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.249 "hdgst": ${hdgst:-false}, 00:22:45.249 "ddgst": ${ddgst:-false} 00:22:45.249 }, 00:22:45.249 "method": "bdev_nvme_attach_controller" 00:22:45.249 } 00:22:45.249 EOF 00:22:45.249 )") 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 00:22:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.249 "adrfam": "ipv4", 00:22:45.249 "trsvcid": "$NVMF_PORT", 00:22:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.249 "hdgst": ${hdgst:-false}, 00:22:45.249 "ddgst": ${ddgst:-false} 00:22:45.249 }, 00:22:45.249 "method": "bdev_nvme_attach_controller" 00:22:45.249 } 00:22:45.249 EOF 00:22:45.249 )") 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 00:22:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.249 "adrfam": "ipv4", 00:22:45.249 "trsvcid": "$NVMF_PORT", 00:22:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.249 "hdgst": ${hdgst:-false}, 00:22:45.249 "ddgst": ${ddgst:-false} 00:22:45.249 }, 00:22:45.249 "method": "bdev_nvme_attach_controller" 00:22:45.249 } 00:22:45.249 EOF 00:22:45.249 )") 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 00:22:45.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.249 "adrfam": "ipv4", 00:22:45.249 "trsvcid": "$NVMF_PORT", 00:22:45.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.249 "hdgst": ${hdgst:-false}, 00:22:45.249 "ddgst": ${ddgst:-false} 00:22:45.249 }, 00:22:45.249 "method": "bdev_nvme_attach_controller" 00:22:45.249 } 00:22:45.249 EOF 00:22:45.249 )") 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.249 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.249 { 00:22:45.249 "params": { 00:22:45.249 "name": "Nvme$subsystem", 00:22:45.249 "trtype": "$TEST_TRANSPORT", 
00:22:45.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.250 "adrfam": "ipv4", 00:22:45.250 "trsvcid": "$NVMF_PORT", 00:22:45.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.250 "hdgst": ${hdgst:-false}, 00:22:45.250 "ddgst": ${ddgst:-false} 00:22:45.250 }, 00:22:45.250 "method": "bdev_nvme_attach_controller" 00:22:45.250 } 00:22:45.250 EOF 00:22:45.250 )") 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.510 [2024-11-06 13:47:08.627466] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:45.510 [2024-11-06 13:47:08.627523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709422 ] 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.510 { 00:22:45.510 "params": { 00:22:45.510 "name": "Nvme$subsystem", 00:22:45.510 "trtype": "$TEST_TRANSPORT", 00:22:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.510 "adrfam": "ipv4", 00:22:45.510 "trsvcid": "$NVMF_PORT", 00:22:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.510 "hdgst": ${hdgst:-false}, 00:22:45.510 "ddgst": ${ddgst:-false} 00:22:45.510 }, 00:22:45.510 "method": "bdev_nvme_attach_controller" 00:22:45.510 } 00:22:45.510 EOF 00:22:45.510 )") 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.510 { 00:22:45.510 "params": { 00:22:45.510 "name": "Nvme$subsystem", 00:22:45.510 "trtype": "$TEST_TRANSPORT", 00:22:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.510 "adrfam": "ipv4", 00:22:45.510 "trsvcid": "$NVMF_PORT", 00:22:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.510 "hdgst": ${hdgst:-false}, 00:22:45.510 "ddgst": ${ddgst:-false} 00:22:45.510 }, 00:22:45.510 "method": "bdev_nvme_attach_controller" 00:22:45.510 } 00:22:45.510 EOF 00:22:45.510 )") 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.510 { 00:22:45.510 "params": { 00:22:45.510 "name": "Nvme$subsystem", 00:22:45.510 "trtype": "$TEST_TRANSPORT", 00:22:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.510 "adrfam": "ipv4", 00:22:45.510 "trsvcid": "$NVMF_PORT", 00:22:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.510 "hdgst": ${hdgst:-false}, 00:22:45.510 "ddgst": ${ddgst:-false} 00:22:45.510 }, 00:22:45.510 "method": "bdev_nvme_attach_controller" 00:22:45.510 } 00:22:45.510 EOF 00:22:45.510 )") 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.510 13:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.510 { 00:22:45.510 "params": { 00:22:45.510 "name": "Nvme$subsystem", 00:22:45.510 "trtype": "$TEST_TRANSPORT", 00:22:45.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.510 "adrfam": "ipv4", 00:22:45.510 "trsvcid": "$NVMF_PORT", 00:22:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.510 "hdgst": ${hdgst:-false}, 00:22:45.510 "ddgst": ${ddgst:-false} 00:22:45.510 }, 00:22:45.510 "method": "bdev_nvme_attach_controller" 00:22:45.510 } 00:22:45.510 EOF 00:22:45.510 )") 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:45.510 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.510 "params": { 00:22:45.510 "name": "Nvme1", 00:22:45.510 "trtype": "tcp", 00:22:45.510 "traddr": "10.0.0.2", 00:22:45.510 "adrfam": "ipv4", 00:22:45.510 "trsvcid": "4420", 00:22:45.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.510 "hdgst": false, 00:22:45.510 "ddgst": false 00:22:45.510 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme2", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 
00:22:45.511 "params": { 00:22:45.511 "name": "Nvme3", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme4", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme5", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme6", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme7", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.511 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme8", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme9", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 },{ 00:22:45.511 "params": { 00:22:45.511 "name": "Nvme10", 00:22:45.511 "trtype": "tcp", 00:22:45.511 "traddr": "10.0.0.2", 00:22:45.511 "adrfam": "ipv4", 00:22:45.511 "trsvcid": "4420", 00:22:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.511 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.511 "hdgst": false, 00:22:45.511 "ddgst": false 00:22:45.511 }, 00:22:45.511 "method": "bdev_nvme_attach_controller" 00:22:45.511 }' 00:22:45.511 [2024-11-06 13:47:08.699003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.511 [2024-11-06 13:47:08.735085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.419 Running I/O for 10 seconds... 
00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:47.419 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.419 13:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:47.420 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:47.680 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:47.681 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:47.681 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=139 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 139 -ge 100 ']' 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:47.959 13:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 709032 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 709032 ']' 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 709032 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 709032 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 709032' 00:22:47.959 killing process with pid 709032 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 709032 00:22:47.959 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 709032 00:22:47.959
[2024-11-06 13:47:11.258095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174640 is same with the state(6) to be set 00:22:47.959
[... same tcp.c:1773 message repeated for tqpair=0x2174640; duplicate lines elided ...]
[2024-11-06 13:47:11.259552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21770a0 is same with the state(6) to be set 00:22:47.960
[... same tcp.c:1773 message repeated for tqpair=0x21770a0; duplicate lines elided ...]
[2024-11-06 13:47:11.260810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174b10 is same with the state(6) to be set 00:22:47.961
[... same tcp.c:1773 message repeated for tqpair=0x2174b10; duplicate lines elided ...]
[2024-11-06 13:47:11.262338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.962
[... same tcp.c:1773 message repeated for tqpair=0x2174fe0; duplicate lines elided ...]
00:22:47.963 [2024-11-06 13:47:11.262575]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262631] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.262669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174fe0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263470] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263530] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263594] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263652] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263708] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.963 [2024-11-06 13:47:11.263723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.263766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754d0 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264961] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.264999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265028] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265092] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265151] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265210] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.964 [2024-11-06 13:47:11.265229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265267] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175d20 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.265962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21761f0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266348] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21766e0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266665] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.266852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.965 [2024-11-06 13:47:11.272633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.965 [2024-11-06 13:47:11.272668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1807730 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.272779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b8610 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.272869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa8b0 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.272958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.272987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.272996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139d420 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273111] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0cb0 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e810 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc750 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273341] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13979f0 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:47.966 [2024-11-06 13:47:11.273448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.966 [2024-11-06 13:47:11.273455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc280 is same with the state(6) to be set 00:22:47.966 [2024-11-06 13:47:11.273978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.273999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 
13:47:11.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:47.967 [2024-11-06 13:47:11.274366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.967 [2024-11-06 13:47:11.274651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.967 [2024-11-06 13:47:11.274658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 
[2024-11-06 13:47:11.274757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.274988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.274997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.968 [2024-11-06 13:47:11.275223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 
[2024-11-06 13:47:11.275264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968
[2024-11-06 13:47:11.275338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968
[2024-11-06 13:47:11.275393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.968 [2024-11-06 13:47:11.275400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.968 [2024-11-06 13:47:11.275407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.968 [2024-11-06 13:47:11.275408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969
[2024-11-06 13:47:11.275443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969
[2024-11-06 13:47:11.275484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176bb0 is same with the state(6) to be set 00:22:47.969 [2024-11-06 13:47:11.275502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 
[2024-11-06 13:47:11.275576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.275818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.275825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.284514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.284547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.284559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.284567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.969 [2024-11-06 13:47:11.284577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.969 [2024-11-06 13:47:11.284585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 
[2024-11-06 13:47:11.284688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.970 [2024-11-06 13:47:11.284880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.970 [2024-11-06 13:47:11.284889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.284987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.284996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.970 [2024-11-06 13:47:11.285396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.970 [2024-11-06 13:47:11.285404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.285990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.285999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.971 [2024-11-06 13:47:11.286100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.971 [2024-11-06 13:47:11.286108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.286320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807730 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.972 [2024-11-06 13:47:11.286593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.972 [2024-11-06 13:47:11.286610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.972 [2024-11-06 13:47:11.286627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.972 [2024-11-06 13:47:11.286643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.286651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fab10 is same with the state(6) to be set
00:22:47.972 [2024-11-06 13:47:11.286671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b8610 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa8b0 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139d420 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a0cb0 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e810 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc750 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13979f0 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.286794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc280 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.291047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:47.972 [2024-11-06 13:47:11.291079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:47.972 [2024-11-06 13:47:11.291094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:47.972 [2024-11-06 13:47:11.292178] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.292515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.972 [2024-11-06 13:47:11.292535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc280 with addr=10.0.0.2, port=4420
00:22:47.972 [2024-11-06 13:47:11.292545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc280 is same with the state(6) to be set
00:22:47.972 [2024-11-06 13:47:11.293013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.972 [2024-11-06 13:47:11.293054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139d420 with addr=10.0.0.2, port=4420
00:22:47.972 [2024-11-06 13:47:11.293066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139d420 is same with the state(6) to be set
00:22:47.972 [2024-11-06 13:47:11.293345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.972 [2024-11-06 13:47:11.293358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fa8b0 with addr=10.0.0.2, port=4420
00:22:47.972 [2024-11-06 13:47:11.293365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa8b0 is same with the state(6) to be set
00:22:47.972 [2024-11-06 13:47:11.293417] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293460] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293499] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293537] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293592] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293645] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.972 [2024-11-06 13:47:11.293679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc280 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.293695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139d420 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.293704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa8b0 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.293806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:47.972 [2024-11-06 13:47:11.293818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:47.972 [2024-11-06 13:47:11.293827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:47.972 [2024-11-06 13:47:11.293837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:47.972 [2024-11-06 13:47:11.293845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:47.972 [2024-11-06 13:47:11.293852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:47.972 [2024-11-06 13:47:11.293859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:47.972 [2024-11-06 13:47:11.293866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:47.972 [2024-11-06 13:47:11.293873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:47.972 [2024-11-06 13:47:11.293879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:47.972 [2024-11-06 13:47:11.293886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:47.972 [2024-11-06 13:47:11.293898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:47.972 [2024-11-06 13:47:11.296527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fab10 (9): Bad file descriptor
00:22:47.972 [2024-11-06 13:47:11.296677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.296691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.296706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.296715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.296726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.972 [2024-11-06 13:47:11.296758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.972 [2024-11-06 13:47:11.296767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.296985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.296993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.297003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.297012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.297021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.297029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.297039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.973 [2024-11-06 13:47:11.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.973 [2024-11-06 13:47:11.297057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 
13:47:11.297165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.973 [2024-11-06 13:47:11.297398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.973 [2024-11-06 13:47:11.297406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.974 [2024-11-06 13:47:11.297468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.297844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.297854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e7a0 is same with the state(6) to be set 00:22:47.974 [2024-11-06 13:47:11.299141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.974 [2024-11-06 13:47:11.299415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.974 [2024-11-06 13:47:11.299424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.975 [2024-11-06 13:47:11.299684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.975 [2024-11-06 13:47:11.299693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.975 [... repeated nvme_qpair.c READ command notices (sqid:1, cid:30-63, nsid:1, lba:28416-32640, len:128, SGL TRANSPORT DATA BLOCK), each paired with an "ABORTED - SQ DELETION (00/08)" completion, elided ...]
00:22:47.976 [2024-11-06 13:47:11.300316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4900 is same with the state(6) to be set
00:22:47.977 [... repeated nvme_qpair.c READ notices (sqid:1, cid:4-63, nsid:1, lba:25088-32640, len:128) and WRITE notices (sqid:1, cid:0-3, nsid:1, lba:32768-33152, len:128), each paired with an "ABORTED - SQ DELETION (00/08)" completion, elided ...]
00:22:47.977 [2024-11-06 13:47:11.302763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5be0 is same with the state(6) to be set
00:22:47.978 [... further repeated READ / "ABORTED - SQ DELETION (00/08)" notice pairs (sqid:1, cid:4 onward, nsid:1, lba:25088 onward), elided ...] 00:22:47.978 [2024-11-06 13:47:11.304285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304490] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 
13:47:11.304698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.978 [2024-11-06 13:47:11.304757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.978 [2024-11-06 13:47:11.304765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.304985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.304993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.979 [2024-11-06 13:47:11.305002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305102] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.305189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.305197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17a0e70 is same with the state(6) to be set 00:22:47.979 [2024-11-06 13:47:11.306464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306691] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.979 [2024-11-06 13:47:11.306756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.979 [2024-11-06 13:47:11.306766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.306981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.306989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 
13:47:11.306998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.980 [2024-11-06 13:47:11.307303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.980 [2024-11-06 13:47:11.307464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.980 [2024-11-06 13:47:11.307475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.307646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3890 is same with the state(6) to be set 00:22:47.981 [2024-11-06 13:47:11.308937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.308952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.308964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.308971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.308981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.308994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309296] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.981 [2024-11-06 13:47:11.309462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.981 [2024-11-06 13:47:11.309472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 
13:47:11.309595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.982 [2024-11-06 13:47:11.309908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.309986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.309996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.310003] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.310013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.310021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.310030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.310038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.310047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.310055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.982 [2024-11-06 13:47:11.310065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.982 [2024-11-06 13:47:11.310073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.310081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e11f0 is same with the state(6) to be set 00:22:47.983 [2024-11-06 13:47:11.311331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:47.983 [2024-11-06 13:47:11.311349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting 
controller 00:22:47.983 [2024-11-06 13:47:11.311364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:47.983 [2024-11-06 13:47:11.311377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:47.983 [2024-11-06 13:47:11.311457] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:47.983 [2024-11-06 13:47:11.311477] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:47.983 [2024-11-06 13:47:11.311558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:47.983 [2024-11-06 13:47:11.311570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:47.983 [2024-11-06 13:47:11.312076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.983 [2024-11-06 13:47:11.312120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a0cb0 with addr=10.0.0.2, port=4420 00:22:47.983 [2024-11-06 13:47:11.312134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0cb0 is same with the state(6) to be set 00:22:47.983 [2024-11-06 13:47:11.312473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.983 [2024-11-06 13:47:11.312486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13979f0 with addr=10.0.0.2, port=4420 00:22:47.983 [2024-11-06 13:47:11.312494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13979f0 is same with the state(6) to be set 00:22:47.983 [2024-11-06 13:47:11.312962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.983 
[2024-11-06 13:47:11.313001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139e810 with addr=10.0.0.2, port=4420 00:22:47.983 [2024-11-06 13:47:11.313012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e810 is same with the state(6) to be set 00:22:47.983 [2024-11-06 13:47:11.313343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.983 [2024-11-06 13:47:11.313356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc750 with addr=10.0.0.2, port=4420 00:22:47.983 [2024-11-06 13:47:11.313364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc750 is same with the state(6) to be set 00:22:47.983 [2024-11-06 13:47:11.314726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.314984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.314994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 
13:47:11.315206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.983 [2024-11-06 13:47:11.315259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.983 [2024-11-06 13:47:11.315268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.984 [2024-11-06 13:47:11.315510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315604] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.984 [2024-11-06 13:47:11.315692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.984 [2024-11-06 13:47:11.315704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.984 [2024-11-06 13:47:11.315712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.984 [2024-11-06 13:47:11.315723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.984 [2024-11-06 13:47:11.315731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.984 [2024-11-06 13:47:11.315739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a7980 is same with the state(6) to be set
00:22:48.246 [2024-11-06 13:47:11.317512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:48.246 [2024-11-06 13:47:11.317540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:48.246 [2024-11-06 13:47:11.317549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:48.246 task offset: 24576 on job bdev=Nvme5n1 fails
00:22:48.246
00:22:48.246 Latency(us)
00:22:48.246 [2024-11-06T12:47:11.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:48.246 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.246 Job: Nvme1n1 ended in about 0.97 seconds with error
00:22:48.246 Verification LBA range: start 0x0 length 0x400
00:22:48.246 Nvme1n1 : 0.97 197.67 12.35 65.89 0.00 240079.79 14745.60 253405.87
00:22:48.246 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.246 Job: Nvme2n1 ended in about 0.97 seconds with error
00:22:48.246 Verification LBA range: start 0x0 length 0x400
00:22:48.246 Nvme2n1 : 0.97 197.17 12.32 65.72 0.00 235924.69 16274.77 290106.03
00:22:48.246 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.246 Job: Nvme3n1 ended in about 0.98 seconds with error
00:22:48.246 Verification LBA range: start 0x0 length 0x400
00:22:48.246 Nvme3n1 : 0.98 200.78 12.55 65.56 0.00 228101.41 35826.35 227191.47
00:22:48.246 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.246 Job: Nvme4n1 ended in about 0.98 seconds with error
00:22:48.246 Verification LBA range: start 0x0 length 0x400
00:22:48.246 Nvme4n1 : 0.98 200.28 12.52 65.40 0.00 223986.29 18896.21 235929.60
00:22:48.247 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme5n1 ended in about 0.96 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme5n1 : 0.96 199.91 12.49 66.64 0.00 218096.64 19333.12 244667.73
00:22:48.247 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme6n1 ended in about 0.98 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme6n1 : 0.98 130.47 8.15 65.23 0.00 291277.65 18240.85 251658.24
00:22:48.247 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme7n1 : 0.96 199.66 12.48 66.55 0.00 208679.68 16056.32 248162.99
00:22:48.247 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme8n1 ended in about 0.96 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme8n1 : 0.96 200.46 12.53 66.47 0.00 203452.62 11632.64 222822.40
00:22:48.247 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme9n1 ended in about 0.99 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme9n1 : 0.99 137.49 8.59 56.61 0.00 273896.96 18677.76 283115.52
00:22:48.247 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.247 Job: Nvme10n1 ended in about 0.98 seconds with error
00:22:48.247 Verification LBA range: start 0x0 length 0x400
00:22:48.247 Nvme10n1 : 0.98 130.15 8.13 65.07 0.00 266726.40 18786.99 262144.00
00:22:48.247 [2024-11-06T12:47:11.623Z] ===================================================================================================================
00:22:48.247 [2024-11-06T12:47:11.623Z] Total : 1794.02 112.13 649.15 0.00 235871.69 11632.64 290106.03
00:22:48.247 [2024-11-06 13:47:11.342652] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:48.247 [2024-11-06 13:47:11.342686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:48.247 [2024-11-06 13:47:11.343140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:48.247 [2024-11-06 13:47:11.343159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b8610 with addr=10.0.0.2, port=4420
00:22:48.247 [2024-11-06 13:47:11.343170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b8610 is same with the state(6) to be set
00:22:48.247 [2024-11-06 13:47:11.343496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:48.247 [2024-11-06 13:47:11.343507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1807730 with addr=10.0.0.2, port=4420
00:22:48.247 [2024-11-06 13:47:11.343514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1807730 is same with the state(6) to be set
00:22:48.247 [2024-11-06 13:47:11.343529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a0cb0 (9): Bad file descriptor
00:22:48.247 [2024-11-06 13:47:11.343540] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13979f0 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.343550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e810 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.343560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc750 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.344009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.247 [2024-11-06 13:47:11.344024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fa8b0 with addr=10.0.0.2, port=4420 00:22:48.247 [2024-11-06 13:47:11.344032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa8b0 is same with the state(6) to be set 00:22:48.247 [2024-11-06 13:47:11.344376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.247 [2024-11-06 13:47:11.344388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139d420 with addr=10.0.0.2, port=4420 00:22:48.247 [2024-11-06 13:47:11.344396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139d420 is same with the state(6) to be set 00:22:48.247 [2024-11-06 13:47:11.344590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.247 [2024-11-06 13:47:11.344600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc280 with addr=10.0.0.2, port=4420 00:22:48.247 [2024-11-06 13:47:11.344608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc280 is same with the state(6) to be set 00:22:48.247 [2024-11-06 13:47:11.344891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.247 [2024-11-06 13:47:11.344902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x17fab10 with addr=10.0.0.2, port=4420 00:22:48.247 [2024-11-06 13:47:11.344910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fab10 is same with the state(6) to be set 00:22:48.247 [2024-11-06 13:47:11.344919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b8610 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.344929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807730 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.344938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.344945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.344957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.344967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:48.247 [2024-11-06 13:47:11.344976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.344983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.344990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.344997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:48.247 [2024-11-06 13:47:11.345005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:48.247 [2024-11-06 13:47:11.345033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:48.247 [2024-11-06 13:47:11.345112] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:48.247 [2024-11-06 13:47:11.345127] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:22:48.247 [2024-11-06 13:47:11.345501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa8b0 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.345514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139d420 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.345524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc280 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.345534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fab10 (9): Bad file descriptor 00:22:48.247 [2024-11-06 13:47:11.345542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:48.247 [2024-11-06 13:47:11.345571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:48.247 [2024-11-06 13:47:11.345876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:48.247 [2024-11-06 13:47:11.345892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:48.247 [2024-11-06 13:47:11.345901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:48.247 [2024-11-06 13:47:11.345909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:48.247 [2024-11-06 13:47:11.345946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:48.247 [2024-11-06 13:47:11.345976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.345984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.345991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.345998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:48.247 [2024-11-06 13:47:11.346005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:48.247 [2024-11-06 13:47:11.346012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:48.247 [2024-11-06 13:47:11.346019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:48.247 [2024-11-06 13:47:11.346026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:48.248 [2024-11-06 13:47:11.346034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:48.248 [2024-11-06 13:47:11.346040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:48.248 [2024-11-06 13:47:11.346048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:48.248 [2024-11-06 13:47:11.346055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:48.248 [2024-11-06 13:47:11.346353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.248 [2024-11-06 13:47:11.346368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc750 with addr=10.0.0.2, port=4420 00:22:48.248 [2024-11-06 13:47:11.346376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc750 is same with the state(6) to be set 00:22:48.248 [2024-11-06 13:47:11.346431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.248 [2024-11-06 13:47:11.346441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139e810 with addr=10.0.0.2, port=4420 00:22:48.248 [2024-11-06 13:47:11.346449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e810 is same with the state(6) to be set 00:22:48.248 [2024-11-06 13:47:11.346598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.248 [2024-11-06 13:47:11.346609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13979f0 with addr=10.0.0.2, port=4420 00:22:48.248 [2024-11-06 13:47:11.346616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13979f0 is same with the state(6) to be set 00:22:48.248 [2024-11-06 13:47:11.346929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.248 [2024-11-06 13:47:11.346944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a0cb0 with addr=10.0.0.2, port=4420 00:22:48.248 [2024-11-06 13:47:11.346952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0cb0 is same with the state(6) to be set 00:22:48.248 [2024-11-06 13:47:11.346981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc750 (9): Bad file descriptor 00:22:48.248 [2024-11-06 
13:47:11.346993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e810 (9): Bad file descriptor 00:22:48.248 [2024-11-06 13:47:11.347004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13979f0 (9): Bad file descriptor 00:22:48.248 [2024-11-06 13:47:11.347013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a0cb0 (9): Bad file descriptor 00:22:48.248 [2024-11-06 13:47:11.347040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:48.248 [2024-11-06 13:47:11.347047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:48.248 [2024-11-06 13:47:11.347054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:48.248 [2024-11-06 13:47:11.347061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:48.248 [2024-11-06 13:47:11.347069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:48.248 [2024-11-06 13:47:11.347076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:48.248 [2024-11-06 13:47:11.347083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:48.248 [2024-11-06 13:47:11.347090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:48.248 [2024-11-06 13:47:11.347097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:48.248 [2024-11-06 13:47:11.347104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:48.248 [2024-11-06 13:47:11.347110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:48.248 [2024-11-06 13:47:11.347117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:48.248 [2024-11-06 13:47:11.347124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:48.248 [2024-11-06 13:47:11.347131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:48.248 [2024-11-06 13:47:11.347138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:48.248 [2024-11-06 13:47:11.347144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:48.248 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 709422 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 709422 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:49.189 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 709422 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.190 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.190 rmmod nvme_tcp 00:22:49.190 rmmod nvme_fabrics 00:22:49.450 rmmod nvme_keyring 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:49.450 13:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 709032 ']' 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 709032 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 709032 ']' 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 709032 00:22:49.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (709032) - No such process 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 709032 is not found' 00:22:49.450 Process with pid 709032 is not found 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.450 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.361 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.361 00:22:51.361 real 0m7.907s 00:22:51.361 user 0m19.702s 00:22:51.361 sys 0m1.217s 00:22:51.361 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.361 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.361 ************************************ 00:22:51.361 END TEST nvmf_shutdown_tc3 00:22:51.361 ************************************ 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.622 ************************************ 00:22:51.622 START TEST nvmf_shutdown_tc4 00:22:51.622 ************************************ 00:22:51.622 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.622 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:51.622 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.623 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.623 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.623 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.623 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:22:51.623 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.623 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.623 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.623 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:51.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:22:51.884 00:22:51.884 --- 10.0.0.2 ping statistics --- 00:22:51.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.884 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:51.884 00:22:51.884 --- 10.0.0.1 ping statistics --- 00:22:51.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.884 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.884 13:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=710876 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 710876 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 710876 ']' 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:51.884 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.884 [2024-11-06 13:47:15.237065] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:22:51.884 [2024-11-06 13:47:15.237133] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.144 [2024-11-06 13:47:15.333591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.144 [2024-11-06 13:47:15.367394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.144 [2024-11-06 13:47:15.367426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.144 [2024-11-06 13:47:15.367436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.144 [2024-11-06 13:47:15.367441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.144 [2024-11-06 13:47:15.367445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
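An editorial aside: `nvmf_tgt` above was started with `-m 0x1E`, and the log reports "Total cores available: 4". A minimal bash sketch (not SPDK code) of how such a hex core mask expands to core numbers, matching the reactors the log shows starting on cores 1 through 4:

```shell
# Expand a hex CPU mask like SPDK's -m 0x1E into the core numbers it selects.
# 0x1E = 0b11110, i.e. bits 1..4 are set, so reactors run on cores 1,2,3,4.
mask=$((0x1E))
cores=""
for bit in $(seq 0 31); do
    if (( (mask >> bit) & 1 )); then
        cores="$cores $bit"
    fi
done
echo "cores:$cores"   # prints: cores: 1 2 3 4
```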
00:22:52.144 [2024-11-06 13:47:15.368778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.144 [2024-11-06 13:47:15.369019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.144 [2024-11-06 13:47:15.369178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.144 [2024-11-06 13:47:15.369179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.715 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.976 [2024-11-06 13:47:16.091686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.976 13:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.976 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.976 Malloc1 00:22:52.976 [2024-11-06 13:47:16.206073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.976 Malloc2 00:22:52.976 Malloc3 00:22:52.976 Malloc4 00:22:52.976 Malloc5 00:22:53.236 Malloc6 00:22:53.236 Malloc7 00:22:53.236 Malloc8 00:22:53.236 Malloc9 
00:22:53.236 Malloc10 00:22:53.236 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.236 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:53.236 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.236 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:53.236 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=711104 00:22:53.496 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:53.496 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:53.496 [2024-11-06 13:47:16.673491] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
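An editorial aside on the `for i in "${num_subsystems[@]}"` / `cat` loop above: `shutdown.sh` iterates `num_subsystems=({1..10})` and appends one RPC stanza per subsystem to `rpcs.txt`. A hedged sketch of that accumulation pattern; the `nvmf_create_subsystem` line is illustrative (the real heredoc content is not shown in this log), though the `cnode$i` naming matches the subsystems (`cnode2`, `cnode10`) seen later:

```shell
# Build one config stanza per subsystem, mirroring the shutdown.sh loop.
# The stanza body here is a placeholder; only the loop shape is from the log.
num_subsystems=({1..10})
rpcs=$(for i in "${num_subsystems[@]}"; do
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i"
done)
echo "$rpcs" | wc -l   # one line per subsystem -> 10
```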
00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 710876 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 710876 ']' 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 710876 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 710876 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:58.799 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 710876' 00:22:58.799 killing process with pid 710876 00:22:58.800 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 710876 00:22:58.800 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 710876 00:22:58.800 Write completed with error (sct=0, sc=8) 00:22:58.800 Write completed with error (sct=0, sc=8) 00:22:58.800 Write completed with error (sct=0, sc=8) 00:22:58.800 starting I/O failed: -6 
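An editorial aside: `killprocess 710876` above verifies the process before killing it and then `wait`s on the pid, so the target's shutdown path runs while `spdk_nvme_perf` still has I/O in flight; the write errors that follow are the expected outcome of tc4. A stand-alone sketch of that kill-and-reap pattern, with `sleep` standing in for `nvmf_tgt`:

```shell
# Start a background process, verify it is alive, kill it, then reap it
# with wait so no zombie is left behind (sleep stands in for nvmf_tgt).
sleep 60 &
pid=$!
kill -0 "$pid"           # liveness check, as killprocess does before killing
kill "$pid"              # SIGTERM lets the shutdown path run
wait "$pid" 2>/dev/null  # reap; exit status reflects the terminating signal
echo "reaped $pid"
```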
00:22:58.800 Write completed with error (sct=0, sc=8) 00:22:58.800 starting I/O failed: -6 00:22:58.800 [... identical "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages repeated, trimmed ...]
starting I/O failed: -6 00:22:58.800 Write completed with error (sct=0, sc=8) [... repeated write-error messages trimmed ...] 00:22:58.800 [2024-11-06 13:47:21.692375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:58.800 [2024-11-06 13:47:21.692552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d2a90 is same with the state(6) to be set 00:22:58.800 [2024-11-06 13:47:21.692588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d2a90 is same with the state(6) to be set 00:22:58.800 [2024-11-06 13:47:21.692595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d2a90 is same with the state(6) to be set 00:22:58.800 [... interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages trimmed ...]
Write completed with error (sct=0, sc=8) 00:22:58.800 starting I/O failed: -6 00:22:58.800 [... repeated write-error / I/O-failed messages trimmed ...] [2024-11-06 13:47:21.693264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:58.800
starting I/O failed: -6 00:22:58.800 Write completed with error (sct=0, sc=8) 00:22:58.800 [... repeated write-error / I/O-failed messages trimmed ...] [2024-11-06 13:47:21.694438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:58.800 [... further repeated write-error / I/O-failed messages trimmed ...]
Write completed with error (sct=0, sc=8) 00:22:58.801 starting I/O failed: -6 00:22:58.801 [... repeated write-error / I/O-failed messages trimmed ...] [2024-11-06 13:47:21.695717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:58.801 NVMe io qpair process completion error 00:22:58.801 [2024-11-06 13:47:21.695886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff7f0 is same with the state(6) to be set 00:22:58.801 [... same tqpair=0x15ff7f0 message repeated at 13:47:21.695912 through 13:47:21.695948, trimmed ...] Write completed with error (sct=0, sc=8) 00:22:58.801 starting I/O failed: -6 00:22:58.801 [2024-11-06 13:47:21.696181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffcc0 is same with the state(6) to be set 00:22:58.801 [2024-11-06 13:47:21.696206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffcc0 is same with the state(6) to be set 00:22:58.801 [2024-11-06 13:47:21.696212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffcc0 is same with the state(6) to be set 00:22:58.801 [... interleaved write-error / I/O-failed messages trimmed ...]
Write completed with error (sct=0, sc=8) 00:22:58.801 starting I/O failed: -6 00:22:58.801 [... repeated write-error / I/O-failed messages trimmed ...] [2024-11-06 13:47:21.696452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600190 is same with the state(6) to be set 00:22:58.801 [... same tqpair=0x1600190 message repeated at 13:47:21.696474 through 13:47:21.696486, trimmed ...] [2024-11-06 13:47:21.696775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:58.801 [... repeated write-error / I/O-failed messages trimmed ...] [2024-11-06 13:47:21.697611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O
failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write 
completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 [2024-11-06 13:47:21.698516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:58.802 Write completed with 
error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed 
with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.802 Write completed with error (sct=0, sc=8) 00:22:58.802 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write 
completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 [2024-11-06 13:47:21.700052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:58.803 NVMe io qpair process completion error 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 
00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 [2024-11-06 13:47:21.701227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:58.803 starting I/O failed: -6 00:22:58.803 starting I/O failed: -6 
00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write 
completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 [2024-11-06 13:47:21.702143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 
00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 
00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 Write completed with error (sct=0, sc=8) 00:22:58.803 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 [2024-11-06 13:47:21.703055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, 
sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error 
(sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with 
error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 [2024-11-06 13:47:21.704740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:58.804 NVMe io qpair process completion error 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 Write completed with error (sct=0, sc=8) 00:22:58.804 starting I/O failed: -6 00:22:58.804 Write 
completed with error (sct=0, sc=8)
00:22:58.804 starting I/O failed: -6
00:22:58.805 [2024-11-06 13:47:21.706000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:58.805 [2024-11-06 13:47:21.706969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:58.805 [2024-11-06 13:47:21.707890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:58.806 [2024-11-06 13:47:21.710204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.806 NVMe io qpair process completion error
00:22:58.806 [2024-11-06 13:47:21.711630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:58.806 [2024-11-06 13:47:21.712463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:58.807 [2024-11-06 13:47:21.713846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:58.807 [2024-11-06 13:47:21.715466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.807 NVMe io qpair process completion error
00:22:58.808 [2024-11-06 13:47:21.717631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:58.808 [2024-11-06 13:47:21.718530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting
I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 
starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 [2024-11-06 13:47:21.720801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:58.808 NVMe io qpair process completion error 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 starting I/O failed: -6 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.808 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting 
I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 [2024-11-06 13:47:21.722019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed 
with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, 
sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 [2024-11-06 13:47:21.722842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 
00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 
00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.809 Write completed with error (sct=0, sc=8) 00:22:58.809 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 [2024-11-06 13:47:21.723769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, 
sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error 
(sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with 
error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 [2024-11-06 13:47:21.725430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:58.810 NVMe io qpair process completion error 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O 
failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 [2024-11-06 13:47:21.726698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed 
with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.810 Write completed with error (sct=0, sc=8) 00:22:58.810 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: 
-6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 [2024-11-06 13:47:21.727511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:58.811 starting I/O failed: -6 00:22:58.811 starting I/O failed: -6 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O 
failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write 
completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6 00:22:58.811 Write completed with error (sct=0, sc=8) 00:22:58.811 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted; distinct events retained below ...]
00:22:58.811 [2024-11-06 13:47:21.728648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:58.812 [2024-11-06 13:47:21.731448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.812 NVMe io qpair process completion error
00:22:58.812 [2024-11-06 13:47:21.732812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:58.812 [2024-11-06 13:47:21.733733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.812 [2024-11-06 13:47:21.734664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:58.813 [2024-11-06 13:47:21.736513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:58.813 NVMe io qpair process completion error
00:22:58.813 [2024-11-06 13:47:21.737686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.814 [2024-11-06 13:47:21.738498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:58.814 [2024-11-06 13:47:21.739438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:58.815 [2024-11-06 13:47:21.741130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No
such device or address) on qpair id 4
00:22:58.815 NVMe io qpair process completion error
00:22:58.815 Initializing NVMe Controllers
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:58.815 Controller IO queue size 128, less than required.
00:22:58.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:58.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:58.815 Initialization complete.
Launching workers.
00:22:58.815 ========================================================
00:22:58.815 Latency(us)
00:22:58.815 Device Information                                                       :    IOPS   MiB/s   Average       min       max
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1911.44   82.13  66982.06    828.15 134115.69
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1875.42   80.58  67573.85    606.31 151315.42
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1841.94   79.15  68818.04    784.13 119737.06
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1877.54   80.68  67534.87    553.65 117752.69
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1850.84   79.53  68533.70    709.50 119063.78
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1884.53   80.98  67349.33    707.35 117670.84
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1879.02   80.74  67569.76    534.71 124683.82
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1898.72   81.59  66899.64    697.91 126729.72
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1915.04   82.29  66356.02    767.79 119057.89
00:22:58.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1849.78   79.48  68739.99    615.95 119671.56
00:22:58.815 ========================================================
00:22:58.815 Total                                                                    : 18784.27  807.14  67625.91    534.71 151315.42
00:22:58.815
00:22:58.815 [2024-11-06 13:47:21.747393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845740 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x846ae0 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747467] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x846900 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844560 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844bc0 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x846720 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845410 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844ef0 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844890 is same with the state(6) to be set
00:22:58.815 [2024-11-06 13:47:21.747670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845a70 is same with the state(6) to be set
00:22:58.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:58.815 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 711104
00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 711104
00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4
-- common/autotest_common.sh@638 -- # local arg=wait 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 711104 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.756 rmmod nvme_tcp 00:22:59.756 rmmod nvme_fabrics 00:22:59.756 rmmod nvme_keyring 00:22:59.756 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 710876 ']' 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 710876 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 710876 ']' 00:22:59.756 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 710876 00:22:59.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (710876) - No such process 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 710876 is not found' 00:22:59.757 Process with pid 710876 is not found 00:22:59.757 
13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.757 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.300 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.301 00:23:02.301 real 0m10.310s 00:23:02.301 user 0m28.044s 00:23:02.301 sys 0m3.997s 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:02.301 13:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:02.301 ************************************ 00:23:02.301 END TEST nvmf_shutdown_tc4 00:23:02.301 ************************************ 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:02.301 00:23:02.301 real 0m43.646s 00:23:02.301 user 1m47.494s 00:23:02.301 sys 0m13.521s 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:02.301 ************************************ 00:23:02.301 END TEST nvmf_shutdown 00:23:02.301 ************************************ 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.301 ************************************ 00:23:02.301 START TEST nvmf_nsid 00:23:02.301 ************************************ 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:02.301 * Looking for test storage... 
00:23:02.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.301 
13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.301 --rc genhtml_branch_coverage=1 00:23:02.301 --rc genhtml_function_coverage=1 00:23:02.301 --rc genhtml_legend=1 00:23:02.301 --rc geninfo_all_blocks=1 00:23:02.301 --rc 
geninfo_unexecuted_blocks=1 00:23:02.301 00:23:02.301 ' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.301 --rc genhtml_branch_coverage=1 00:23:02.301 --rc genhtml_function_coverage=1 00:23:02.301 --rc genhtml_legend=1 00:23:02.301 --rc geninfo_all_blocks=1 00:23:02.301 --rc geninfo_unexecuted_blocks=1 00:23:02.301 00:23:02.301 ' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.301 --rc genhtml_branch_coverage=1 00:23:02.301 --rc genhtml_function_coverage=1 00:23:02.301 --rc genhtml_legend=1 00:23:02.301 --rc geninfo_all_blocks=1 00:23:02.301 --rc geninfo_unexecuted_blocks=1 00:23:02.301 00:23:02.301 ' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:02.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.301 --rc genhtml_branch_coverage=1 00:23:02.301 --rc genhtml_function_coverage=1 00:23:02.301 --rc genhtml_legend=1 00:23:02.301 --rc geninfo_all_blocks=1 00:23:02.301 --rc geninfo_unexecuted_blocks=1 00:23:02.301 00:23:02.301 ' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.301 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.302 13:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.302 13:47:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.440 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.441 13:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.441 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:10.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:23:10.441 00:23:10.441 --- 10.0.0.2 ping statistics --- 00:23:10.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.441 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:10.441 00:23:10.441 --- 10.0.0.1 ping statistics --- 00:23:10.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.441 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.441 13:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=716540 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 716540 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 716540 ']' 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:10.441 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 [2024-11-06 13:47:32.876451] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:23:10.441 [2024-11-06 13:47:32.876523] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.441 [2024-11-06 13:47:32.960111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.441 [2024-11-06 13:47:33.001002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.441 [2024-11-06 13:47:33.001034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.441 [2024-11-06 13:47:33.001042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.441 [2024-11-06 13:47:33.001049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.441 [2024-11-06 13:47:33.001055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.441 [2024-11-06 13:47:33.001671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=716643 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.441 
13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=793fcd3f-7c71-4930-b7b5-0b83451a4d25 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=38753853-4a91-4382-836d-2d45212bd468 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ec006002-7ce0-4f66-a2aa-67382fc243ab 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.441 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.441 null0 00:23:10.441 null1 00:23:10.441 null2 00:23:10.441 [2024-11-06 13:47:33.768846] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:23:10.441 [2024-11-06 13:47:33.768898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716643 ] 00:23:10.441 [2024-11-06 13:47:33.771464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.441 [2024-11-06 13:47:33.795657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.701 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.701 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 716643 /var/tmp/tgt2.sock 00:23:10.701 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 716643 ']' 00:23:10.701 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:10.701 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:10.702 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:10.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:10.702 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:10.702 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.702 [2024-11-06 13:47:33.858161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.702 [2024-11-06 13:47:33.894098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.962 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.962 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:10.962 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:11.221 [2024-11-06 13:47:34.382720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.221 [2024-11-06 13:47:34.398847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:11.221 nvme0n1 nvme0n2 00:23:11.221 nvme1n1 00:23:11.221 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:11.221 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:11.221 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:12.603 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 793fcd3f-7c71-4930-b7b5-0b83451a4d25 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.544 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:13.544 13:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=793fcd3f7c714930b7b50b83451a4d25 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 793FCD3F7C714930B7B50B83451A4D25 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 793FCD3F7C714930B7B50B83451A4D25 == \7\9\3\F\C\D\3\F\7\C\7\1\4\9\3\0\B\7\B\5\0\B\8\3\4\5\1\A\4\D\2\5 ]] 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 38753853-4a91-4382-836d-2d45212bd468 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:13.805 
13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:13.805 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=387538534a914382836d2d45212bd468 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 387538534A914382836D2D45212BD468 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 387538534A914382836D2D45212BD468 == \3\8\7\5\3\8\5\3\4\A\9\1\4\3\8\2\8\3\6\D\2\D\4\5\2\1\2\B\D\4\6\8 ]] 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ec006002-7ce0-4f66-a2aa-67382fc243ab 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ec0060027ce04f66a2aa67382fc243ab 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EC0060027CE04F66A2AA67382FC243AB 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EC0060027CE04F66A2AA67382FC243AB == \E\C\0\0\6\0\0\2\7\C\E\0\4\F\6\6\A\2\A\A\6\7\3\8\2\F\C\2\4\3\A\B ]] 00:23:13.805 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 716643 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 716643 ']' 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 716643 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716643 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716643' 00:23:14.067 killing process with pid 716643 00:23:14.067 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 716643 00:23:14.067 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 716643 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.327 rmmod nvme_tcp 00:23:14.327 rmmod nvme_fabrics 00:23:14.327 rmmod nvme_keyring 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 716540 ']' 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 716540 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 716540 ']' 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 716540 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:14.327 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:14.327 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716540 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716540' 00:23:14.587 killing process with pid 716540 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 716540 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 716540 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.587 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.587 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.136 00:23:17.136 real 0m14.713s 00:23:17.136 user 0m11.220s 00:23:17.136 sys 0m6.705s 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.136 ************************************ 00:23:17.136 END TEST nvmf_nsid 00:23:17.136 ************************************ 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:17.136 00:23:17.136 real 13m6.285s 00:23:17.136 user 27m35.380s 00:23:17.136 sys 3m52.565s 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:17.136 13:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.136 ************************************ 00:23:17.136 END TEST nvmf_target_extra 00:23:17.136 ************************************ 00:23:17.136 13:47:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:17.136 13:47:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:17.136 13:47:40 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:17.136 13:47:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.136 ************************************ 00:23:17.136 START TEST nvmf_host 00:23:17.136 ************************************ 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:17.136 * Looking for test storage... 
00:23:17.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.136 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.137 --rc genhtml_branch_coverage=1 00:23:17.137 --rc genhtml_function_coverage=1 00:23:17.137 --rc genhtml_legend=1 00:23:17.137 --rc geninfo_all_blocks=1 00:23:17.137 --rc geninfo_unexecuted_blocks=1 00:23:17.137 00:23:17.137 ' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.137 --rc genhtml_branch_coverage=1 00:23:17.137 --rc genhtml_function_coverage=1 00:23:17.137 --rc genhtml_legend=1 00:23:17.137 --rc 
geninfo_all_blocks=1 00:23:17.137 --rc geninfo_unexecuted_blocks=1 00:23:17.137 00:23:17.137 ' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.137 --rc genhtml_branch_coverage=1 00:23:17.137 --rc genhtml_function_coverage=1 00:23:17.137 --rc genhtml_legend=1 00:23:17.137 --rc geninfo_all_blocks=1 00:23:17.137 --rc geninfo_unexecuted_blocks=1 00:23:17.137 00:23:17.137 ' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:17.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.137 --rc genhtml_branch_coverage=1 00:23:17.137 --rc genhtml_function_coverage=1 00:23:17.137 --rc genhtml_legend=1 00:23:17.137 --rc geninfo_all_blocks=1 00:23:17.137 --rc geninfo_unexecuted_blocks=1 00:23:17.137 00:23:17.137 ' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.137 ************************************ 00:23:17.137 START TEST nvmf_multicontroller 00:23:17.137 ************************************ 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:17.137 * Looking for test storage... 
00:23:17.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.137 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.399 --rc genhtml_branch_coverage=1 00:23:17.399 --rc genhtml_function_coverage=1 
00:23:17.399 --rc genhtml_legend=1 00:23:17.399 --rc geninfo_all_blocks=1 00:23:17.399 --rc geninfo_unexecuted_blocks=1 00:23:17.399 00:23:17.399 ' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.399 --rc genhtml_branch_coverage=1 00:23:17.399 --rc genhtml_function_coverage=1 00:23:17.399 --rc genhtml_legend=1 00:23:17.399 --rc geninfo_all_blocks=1 00:23:17.399 --rc geninfo_unexecuted_blocks=1 00:23:17.399 00:23:17.399 ' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.399 --rc genhtml_branch_coverage=1 00:23:17.399 --rc genhtml_function_coverage=1 00:23:17.399 --rc genhtml_legend=1 00:23:17.399 --rc geninfo_all_blocks=1 00:23:17.399 --rc geninfo_unexecuted_blocks=1 00:23:17.399 00:23:17.399 ' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.399 --rc genhtml_branch_coverage=1 00:23:17.399 --rc genhtml_function_coverage=1 00:23:17.399 --rc genhtml_legend=1 00:23:17.399 --rc geninfo_all_blocks=1 00:23:17.399 --rc geninfo_unexecuted_blocks=1 00:23:17.399 00:23:17.399 ' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.399 13:47:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.399 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.400 13:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.539 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.539 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.539 13:47:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.539 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:23:25.540 00:23:25.540 --- 10.0.0.2 ping statistics --- 00:23:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.540 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:23:25.540 00:23:25.540 --- 10.0.0.1 ping statistics --- 00:23:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.540 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.540 13:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=721746 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 721746 00:23:25.540 13:47:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 721746 ']' 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.540 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.540 [2024-11-06 13:47:48.099565] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:23:25.540 [2024-11-06 13:47:48.099633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.540 [2024-11-06 13:47:48.199932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.540 [2024-11-06 13:47:48.252419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.540 [2024-11-06 13:47:48.252470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:25.540 [2024-11-06 13:47:48.252479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.540 [2024-11-06 13:47:48.252486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.540 [2024-11-06 13:47:48.252493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.540 [2024-11-06 13:47:48.254300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.540 [2024-11-06 13:47:48.254468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.540 [2024-11-06 13:47:48.254469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 [2024-11-06 13:47:48.966257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 Malloc0 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 [2024-11-06 
13:47:49.034011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 [2024-11-06 13:47:49.045956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 Malloc1 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=722095 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 722095 /var/tmp/bdevperf.sock 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 722095 ']' 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.834 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.901 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.901 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:26.901 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.901 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.901 13:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.901 NVMe0n1 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:26.901 
13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.901 1 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.901 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.902 13:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 request: 00:23:26.902 { 00:23:26.902 "name": "NVMe0", 00:23:26.902 "trtype": "tcp", 00:23:26.902 "traddr": "10.0.0.2", 00:23:26.902 "adrfam": "ipv4", 00:23:26.902 "trsvcid": "4420", 00:23:26.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.902 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.902 "hostaddr": "10.0.0.1", 00:23:26.902 "prchk_reftag": false, 00:23:26.902 "prchk_guard": false, 00:23:26.902 "hdgst": false, 00:23:26.902 "ddgst": false, 00:23:26.902 "allow_unrecognized_csi": false, 00:23:26.902 "method": "bdev_nvme_attach_controller", 00:23:26.902 "req_id": 1 00:23:26.902 } 00:23:26.902 Got JSON-RPC error response 00:23:26.902 response: 00:23:26.902 { 00:23:26.902 "code": -114, 00:23:26.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.902 } 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.902 13:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 request: 00:23:26.902 { 00:23:26.902 "name": "NVMe0", 00:23:26.902 "trtype": "tcp", 00:23:26.902 "traddr": "10.0.0.2", 00:23:26.902 "adrfam": "ipv4", 00:23:26.902 "trsvcid": "4420", 00:23:26.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.902 "hostaddr": "10.0.0.1", 00:23:26.902 "prchk_reftag": false, 00:23:26.902 "prchk_guard": false, 00:23:26.902 "hdgst": false, 00:23:26.902 "ddgst": false, 00:23:26.902 "allow_unrecognized_csi": false, 00:23:26.902 "method": "bdev_nvme_attach_controller", 00:23:26.902 "req_id": 1 00:23:26.902 } 00:23:26.902 Got JSON-RPC error response 00:23:26.902 response: 00:23:26.902 { 00:23:26.902 "code": -114, 00:23:26.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.902 } 00:23:26.902 13:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 request: 00:23:26.902 { 00:23:26.902 "name": "NVMe0", 00:23:26.902 "trtype": "tcp", 00:23:26.902 "traddr": "10.0.0.2", 00:23:26.902 "adrfam": "ipv4", 00:23:26.902 "trsvcid": "4420", 00:23:26.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.902 "hostaddr": "10.0.0.1", 00:23:26.902 "prchk_reftag": false, 00:23:26.902 "prchk_guard": false, 00:23:26.902 "hdgst": false, 00:23:26.902 "ddgst": false, 00:23:26.902 "multipath": "disable", 00:23:26.902 "allow_unrecognized_csi": false, 00:23:26.902 "method": "bdev_nvme_attach_controller", 00:23:26.902 "req_id": 1 00:23:26.902 } 00:23:26.902 Got JSON-RPC error response 00:23:26.902 response: 00:23:26.902 { 00:23:26.902 "code": -114, 00:23:26.902 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.902 } 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 request: 00:23:26.902 { 00:23:26.902 "name": "NVMe0", 00:23:26.902 "trtype": "tcp", 00:23:26.902 "traddr": "10.0.0.2", 00:23:26.902 "adrfam": "ipv4", 00:23:26.902 "trsvcid": "4420", 00:23:26.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.902 "hostaddr": "10.0.0.1", 00:23:26.902 "prchk_reftag": false, 00:23:26.902 "prchk_guard": false, 00:23:26.902 "hdgst": false, 00:23:26.902 "ddgst": false, 00:23:26.902 "multipath": "failover", 00:23:26.902 "allow_unrecognized_csi": false, 00:23:26.902 "method": "bdev_nvme_attach_controller", 00:23:26.902 "req_id": 1 00:23:26.902 } 00:23:26.902 Got JSON-RPC error response 00:23:26.902 response: 00:23:26.902 { 00:23:26.902 "code": -114, 00:23:26.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.902 } 00:23:26.902 13:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.902 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.168 NVMe0n1 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.168 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.428 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:27.428 13:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.369 { 00:23:28.369 "results": [ 00:23:28.369 { 00:23:28.369 "job": "NVMe0n1", 00:23:28.369 "core_mask": "0x1", 00:23:28.369 "workload": "write", 00:23:28.369 "status": "finished", 00:23:28.369 "queue_depth": 128, 00:23:28.369 "io_size": 4096, 00:23:28.369 "runtime": 1.007358, 00:23:28.369 "iops": 28768.322681707992, 00:23:28.369 "mibps": 112.37626047542184, 00:23:28.369 "io_failed": 0, 00:23:28.369 "io_timeout": 0, 00:23:28.369 "avg_latency_us": 4439.647285944329, 00:23:28.369 "min_latency_us": 2143.5733333333333, 00:23:28.369 "max_latency_us": 15510.186666666666 00:23:28.369 } 00:23:28.369 ], 00:23:28.369 "core_count": 1 00:23:28.369 } 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 722095 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 722095 ']' 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 722095 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.369 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 722095 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 722095' 00:23:28.630 killing process with pid 722095 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 722095 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 722095 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:28.630 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:28.630 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.630 [2024-11-06 13:47:49.167753] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:23:28.630 [2024-11-06 13:47:49.167812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722095 ] 00:23:28.630 [2024-11-06 13:47:49.238653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.630 [2024-11-06 13:47:49.274665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.630 [2024-11-06 13:47:50.550280] bdev.c:4687:bdev_name_add: *ERROR*: Bdev name f29eb5bf-0257-4a71-b5be-657b8f064bc7 already exists 00:23:28.630 [2024-11-06 13:47:50.550312] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:f29eb5bf-0257-4a71-b5be-657b8f064bc7 alias for bdev NVMe1n1 00:23:28.630 [2024-11-06 13:47:50.550322] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:28.630 Running I/O for 1 seconds... 00:23:28.630 28771.00 IOPS, 112.39 MiB/s 00:23:28.630 Latency(us) 00:23:28.630 [2024-11-06T12:47:52.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.630 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:28.630 NVMe0n1 : 1.01 28768.32 112.38 0.00 0.00 4439.65 2143.57 15510.19 00:23:28.630 [2024-11-06T12:47:52.006Z] =================================================================================================================== 00:23:28.630 [2024-11-06T12:47:52.006Z] Total : 28768.32 112.38 0.00 0.00 4439.65 2143.57 15510.19 00:23:28.630 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.630 00:23:28.630 Latency(us) 00:23:28.630 [2024-11-06T12:47:52.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.630 [2024-11-06T12:47:52.007Z] =================================================================================================================== 00:23:28.631 [2024-11-06T12:47:52.007Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:23:28.631 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.631 13:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.631 rmmod nvme_tcp 00:23:28.631 rmmod nvme_fabrics 00:23:28.631 rmmod nvme_keyring 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 721746 ']' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 721746 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 721746 ']' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 721746 
00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 721746 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 721746' 00:23:28.891 killing process with pid 721746 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 721746 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 721746 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.891 13:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.435 00:23:31.435 real 0m13.970s 00:23:31.435 user 0m17.215s 00:23:31.435 sys 0m6.423s 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.435 ************************************ 00:23:31.435 END TEST nvmf_multicontroller 00:23:31.435 ************************************ 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.435 ************************************ 00:23:31.435 START TEST nvmf_aer 00:23:31.435 ************************************ 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:31.435 * Looking for test storage... 
00:23:31.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.435 --rc genhtml_branch_coverage=1 00:23:31.435 --rc genhtml_function_coverage=1 00:23:31.435 --rc genhtml_legend=1 00:23:31.435 --rc geninfo_all_blocks=1 00:23:31.435 --rc geninfo_unexecuted_blocks=1 00:23:31.435 00:23:31.435 ' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.435 --rc 
genhtml_branch_coverage=1 00:23:31.435 --rc genhtml_function_coverage=1 00:23:31.435 --rc genhtml_legend=1 00:23:31.435 --rc geninfo_all_blocks=1 00:23:31.435 --rc geninfo_unexecuted_blocks=1 00:23:31.435 00:23:31.435 ' 00:23:31.435 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:31.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.435 --rc genhtml_branch_coverage=1 00:23:31.435 --rc genhtml_function_coverage=1 00:23:31.435 --rc genhtml_legend=1 00:23:31.435 --rc geninfo_all_blocks=1 00:23:31.435 --rc geninfo_unexecuted_blocks=1 00:23:31.435 00:23:31.435 ' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.436 --rc genhtml_branch_coverage=1 00:23:31.436 --rc genhtml_function_coverage=1 00:23:31.436 --rc genhtml_legend=1 00:23:31.436 --rc geninfo_all_blocks=1 00:23:31.436 --rc geninfo_unexecuted_blocks=1 00:23:31.436 00:23:31.436 ' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.436 13:47:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.436 13:47:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.578 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.578 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.578 13:48:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.578 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:39.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:23:39.579 00:23:39.579 --- 10.0.0.2 ping statistics --- 00:23:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.579 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:39.579 00:23:39.579 --- 10.0.0.1 ping statistics --- 00:23:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.579 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.579 13:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=726887 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 726887 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 726887 ']' 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.579 [2024-11-06 13:48:02.063033] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:23:39.579 [2024-11-06 13:48:02.063106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.579 [2024-11-06 13:48:02.145854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.579 [2024-11-06 13:48:02.189040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:39.579 [2024-11-06 13:48:02.189076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.579 [2024-11-06 13:48:02.189084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.579 [2024-11-06 13:48:02.189090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.579 [2024-11-06 13:48:02.189096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.579 [2024-11-06 13:48:02.190814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.579 [2024-11-06 13:48:02.190946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.579 [2024-11-06 13:48:02.191106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.579 [2024-11-06 13:48:02.191106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.579 [2024-11-06 13:48:02.918360] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.579 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.839 Malloc0 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.839 [2024-11-06 13:48:02.991122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.839 13:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.839 [ 00:23:39.839 { 00:23:39.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.839 "subtype": "Discovery", 00:23:39.839 "listen_addresses": [], 00:23:39.839 "allow_any_host": true, 00:23:39.839 "hosts": [] 00:23:39.839 }, 00:23:39.839 { 00:23:39.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.839 "subtype": "NVMe", 00:23:39.839 "listen_addresses": [ 00:23:39.839 { 00:23:39.839 "trtype": "TCP", 00:23:39.839 "adrfam": "IPv4", 00:23:39.839 "traddr": "10.0.0.2", 00:23:39.839 "trsvcid": "4420" 00:23:39.839 } 00:23:39.839 ], 00:23:39.839 "allow_any_host": true, 00:23:39.839 "hosts": [], 00:23:39.839 "serial_number": "SPDK00000000000001", 00:23:39.839 "model_number": "SPDK bdev Controller", 00:23:39.839 "max_namespaces": 2, 00:23:39.839 "min_cntlid": 1, 00:23:39.839 "max_cntlid": 65519, 00:23:39.839 "namespaces": [ 00:23:39.839 { 00:23:39.839 "nsid": 1, 00:23:39.839 "bdev_name": "Malloc0", 00:23:39.839 "name": "Malloc0", 00:23:39.839 "nguid": "F2C44E6B5D8E4F42AEAA19B62C8679E9", 00:23:39.839 "uuid": "f2c44e6b-5d8e-4f42-aeaa-19b62c8679e9" 00:23:39.839 } 00:23:39.839 ] 00:23:39.839 } 00:23:39.839 ] 00:23:39.839 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.839 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:39.839 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:39.839 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=727117 00:23:39.839 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:39.840 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 Malloc1 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 Asynchronous Event Request test 00:23:40.100 Attaching to 10.0.0.2 00:23:40.100 Attached to 10.0.0.2 00:23:40.100 Registering asynchronous event callbacks... 00:23:40.100 Starting namespace attribute notice tests for all controllers... 00:23:40.100 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:40.100 aer_cb - Changed Namespace 00:23:40.100 Cleaning up... 
00:23:40.100 [ 00:23:40.100 { 00:23:40.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:40.100 "subtype": "Discovery", 00:23:40.100 "listen_addresses": [], 00:23:40.100 "allow_any_host": true, 00:23:40.100 "hosts": [] 00:23:40.100 }, 00:23:40.100 { 00:23:40.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.100 "subtype": "NVMe", 00:23:40.100 "listen_addresses": [ 00:23:40.100 { 00:23:40.100 "trtype": "TCP", 00:23:40.100 "adrfam": "IPv4", 00:23:40.100 "traddr": "10.0.0.2", 00:23:40.100 "trsvcid": "4420" 00:23:40.100 } 00:23:40.100 ], 00:23:40.100 "allow_any_host": true, 00:23:40.100 "hosts": [], 00:23:40.100 "serial_number": "SPDK00000000000001", 00:23:40.100 "model_number": "SPDK bdev Controller", 00:23:40.100 "max_namespaces": 2, 00:23:40.100 "min_cntlid": 1, 00:23:40.100 "max_cntlid": 65519, 00:23:40.100 "namespaces": [ 00:23:40.100 { 00:23:40.100 "nsid": 1, 00:23:40.100 "bdev_name": "Malloc0", 00:23:40.100 "name": "Malloc0", 00:23:40.100 "nguid": "F2C44E6B5D8E4F42AEAA19B62C8679E9", 00:23:40.100 "uuid": "f2c44e6b-5d8e-4f42-aeaa-19b62c8679e9" 00:23:40.100 }, 00:23:40.100 { 00:23:40.100 "nsid": 2, 00:23:40.100 "bdev_name": "Malloc1", 00:23:40.100 "name": "Malloc1", 00:23:40.100 "nguid": "4CDB4EF4471D4DD89E6931862FB1684A", 00:23:40.100 "uuid": "4cdb4ef4-471d-4dd8-9e69-31862fb1684a" 00:23:40.100 } 00:23:40.100 ] 00:23:40.100 } 00:23:40.100 ] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 727117 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.100 rmmod nvme_tcp 00:23:40.100 rmmod nvme_fabrics 00:23:40.100 rmmod nvme_keyring 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
726887 ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 726887 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 726887 ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 726887 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.100 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 726887 00:23:40.360 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:40.360 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:40.360 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 726887' 00:23:40.360 killing process with pid 726887 00:23:40.360 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 726887 00:23:40.360 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 726887 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.361 13:48:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.905 00:23:42.905 real 0m11.327s 00:23:42.905 user 0m8.014s 00:23:42.905 sys 0m5.979s 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.905 ************************************ 00:23:42.905 END TEST nvmf_aer 00:23:42.905 ************************************ 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.905 ************************************ 00:23:42.905 START TEST nvmf_async_init 00:23:42.905 ************************************ 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:42.905 * Looking for test storage... 
00:23:42.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.905 13:48:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.905 --rc genhtml_branch_coverage=1 00:23:42.905 --rc genhtml_function_coverage=1 00:23:42.905 --rc genhtml_legend=1 00:23:42.905 --rc geninfo_all_blocks=1 00:23:42.905 --rc geninfo_unexecuted_blocks=1 00:23:42.905 
00:23:42.905 ' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.905 --rc genhtml_branch_coverage=1 00:23:42.905 --rc genhtml_function_coverage=1 00:23:42.905 --rc genhtml_legend=1 00:23:42.905 --rc geninfo_all_blocks=1 00:23:42.905 --rc geninfo_unexecuted_blocks=1 00:23:42.905 00:23:42.905 ' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.905 --rc genhtml_branch_coverage=1 00:23:42.905 --rc genhtml_function_coverage=1 00:23:42.905 --rc genhtml_legend=1 00:23:42.905 --rc geninfo_all_blocks=1 00:23:42.905 --rc geninfo_unexecuted_blocks=1 00:23:42.905 00:23:42.905 ' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.905 --rc genhtml_branch_coverage=1 00:23:42.905 --rc genhtml_function_coverage=1 00:23:42.905 --rc genhtml_legend=1 00:23:42.905 --rc geninfo_all_blocks=1 00:23:42.905 --rc geninfo_unexecuted_blocks=1 00:23:42.905 00:23:42.905 ' 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.905 13:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.905 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a5c1758cdd0340c3bf24c11a1f6a656c 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.906 13:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.046 13:48:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.046 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.047 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.047 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:23:51.047 00:23:51.047 --- 10.0.0.2 ping statistics --- 00:23:51.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.047 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:51.047 00:23:51.047 --- 10.0.0.1 ping statistics --- 00:23:51.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.047 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=731795 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 731795 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 731795 ']' 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.047 13:48:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.047 [2024-11-06 13:48:13.692060] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
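The `waitforlisten 731795` call above blocks until the freshly started `nvmf_tgt` process is up and serving its RPC socket. A minimal standalone sketch of that retry/poll pattern, with a hypothetical helper name (`wait_for` is illustrative, not the actual SPDK function):

```shell
# Hypothetical polling helper sketching the waitforlisten idea:
# retry a probe command a bounded number of times instead of
# sleeping a fixed interval. Not the SPDK implementation itself.
wait_for() {
    local retries=$1; shift
    while (( retries-- > 0 )); do
        "$@" && return 0   # probe succeeded, target is up
        sleep 0.1
    done
    return 1               # gave up after retries attempts
}

# The real harness probes the RPC socket, roughly in the spirit of:
#   wait_for 100 test -S /var/tmp/spdk.sock
```

The bounded retry count keeps a hung target from stalling the whole pipeline; the trap at `nvmf/common.sh@474` then tears things down on exit.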
00:23:51.047 [2024-11-06 13:48:13.692127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.047 [2024-11-06 13:48:13.776571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.047 [2024-11-06 13:48:13.817625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.047 [2024-11-06 13:48:13.817665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.047 [2024-11-06 13:48:13.817673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.047 [2024-11-06 13:48:13.817680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.047 [2024-11-06 13:48:13.817685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
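Earlier in this log (`host/async_init.sh@20`) the harness builds a namespace NGUID by stripping the hyphens from a fresh UUID (`uuidgen | tr -d -` producing `a5c1758cdd0340c3bf24c11a1f6a656c`), and `bdev_get_bdevs` later reports the same bytes re-hyphenated as the bdev UUID. A sketch of that round-trip; the helper names are illustrative, not part of SPDK:

```shell
# NGUID <-> UUID round-trip as seen in this log: a UUID is 32 hex
# digits grouped 8-4-4-4-12; the NGUID is the same digits unhyphenated.
uuid_to_nguid() {
    printf '%s\n' "${1//-/}"   # strip the four hyphens
}
nguid_to_uuid() {
    local n=$1                 # re-insert the 8-4-4-4-12 grouping
    printf '%s-%s-%s-%s-%s\n' "${n:0:8}" "${n:8:4}" "${n:12:4}" "${n:16:4}" "${n:20:12}"
}
```

This is why the `-g a5c1758cdd0340c3bf24c11a1f6a656c` passed to `nvmf_subsystem_add_ns` below shows up as the alias `a5c1758c-dd03-40c3-bf24-c11a1f6a656c` on the attached `nvme0n1` bdev.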
00:23:51.047 [2024-11-06 13:48:13.818283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 [2024-11-06 13:48:14.551624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 null0 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a5c1758cdd0340c3bf24c11a1f6a656c 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.308 [2024-11-06 13:48:14.591865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.308 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.569 nvme0n1 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.569 [ 00:23:51.569 { 00:23:51.569 "name": "nvme0n1", 00:23:51.569 "aliases": [ 00:23:51.569 "a5c1758c-dd03-40c3-bf24-c11a1f6a656c" 00:23:51.569 ], 00:23:51.569 "product_name": "NVMe disk", 00:23:51.569 "block_size": 512, 00:23:51.569 "num_blocks": 2097152, 00:23:51.569 "uuid": "a5c1758c-dd03-40c3-bf24-c11a1f6a656c", 00:23:51.569 "numa_id": 0, 00:23:51.569 "assigned_rate_limits": { 00:23:51.569 "rw_ios_per_sec": 0, 00:23:51.569 "rw_mbytes_per_sec": 0, 00:23:51.569 "r_mbytes_per_sec": 0, 00:23:51.569 "w_mbytes_per_sec": 0 00:23:51.569 }, 00:23:51.569 "claimed": false, 00:23:51.569 "zoned": false, 00:23:51.569 "supported_io_types": { 00:23:51.569 "read": true, 00:23:51.569 "write": true, 00:23:51.569 "unmap": false, 00:23:51.569 "flush": true, 00:23:51.569 "reset": true, 00:23:51.569 "nvme_admin": true, 00:23:51.569 "nvme_io": true, 00:23:51.569 "nvme_io_md": false, 00:23:51.569 "write_zeroes": true, 00:23:51.569 "zcopy": false, 00:23:51.569 "get_zone_info": false, 00:23:51.569 "zone_management": false, 00:23:51.569 "zone_append": false, 00:23:51.569 "compare": true, 00:23:51.569 "compare_and_write": true, 00:23:51.569 "abort": true, 00:23:51.569 "seek_hole": false, 00:23:51.569 "seek_data": false, 00:23:51.569 "copy": true, 00:23:51.569 
"nvme_iov_md": false 00:23:51.569 }, 00:23:51.569 "memory_domains": [ 00:23:51.569 { 00:23:51.569 "dma_device_id": "system", 00:23:51.569 "dma_device_type": 1 00:23:51.569 } 00:23:51.569 ], 00:23:51.569 "driver_specific": { 00:23:51.569 "nvme": [ 00:23:51.569 { 00:23:51.569 "trid": { 00:23:51.569 "trtype": "TCP", 00:23:51.569 "adrfam": "IPv4", 00:23:51.569 "traddr": "10.0.0.2", 00:23:51.569 "trsvcid": "4420", 00:23:51.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.569 }, 00:23:51.569 "ctrlr_data": { 00:23:51.569 "cntlid": 1, 00:23:51.569 "vendor_id": "0x8086", 00:23:51.569 "model_number": "SPDK bdev Controller", 00:23:51.569 "serial_number": "00000000000000000000", 00:23:51.569 "firmware_revision": "25.01", 00:23:51.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.569 "oacs": { 00:23:51.569 "security": 0, 00:23:51.569 "format": 0, 00:23:51.569 "firmware": 0, 00:23:51.569 "ns_manage": 0 00:23:51.569 }, 00:23:51.569 "multi_ctrlr": true, 00:23:51.569 "ana_reporting": false 00:23:51.569 }, 00:23:51.569 "vs": { 00:23:51.569 "nvme_version": "1.3" 00:23:51.569 }, 00:23:51.569 "ns_data": { 00:23:51.569 "id": 1, 00:23:51.569 "can_share": true 00:23:51.569 } 00:23:51.569 } 00:23:51.569 ], 00:23:51.569 "mp_policy": "active_passive" 00:23:51.569 } 00:23:51.569 } 00:23:51.569 ] 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.569 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.569 [2024-11-06 13:48:14.840327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:51.569 [2024-11-06 13:48:14.840388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x142bf60 (9): Bad file descriptor 00:23:51.830 [2024-11-06 13:48:14.972840] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.830 [ 00:23:51.830 { 00:23:51.830 "name": "nvme0n1", 00:23:51.830 "aliases": [ 00:23:51.830 "a5c1758c-dd03-40c3-bf24-c11a1f6a656c" 00:23:51.830 ], 00:23:51.830 "product_name": "NVMe disk", 00:23:51.830 "block_size": 512, 00:23:51.830 "num_blocks": 2097152, 00:23:51.830 "uuid": "a5c1758c-dd03-40c3-bf24-c11a1f6a656c", 00:23:51.830 "numa_id": 0, 00:23:51.830 "assigned_rate_limits": { 00:23:51.830 "rw_ios_per_sec": 0, 00:23:51.830 "rw_mbytes_per_sec": 0, 00:23:51.830 "r_mbytes_per_sec": 0, 00:23:51.830 "w_mbytes_per_sec": 0 00:23:51.830 }, 00:23:51.830 "claimed": false, 00:23:51.830 "zoned": false, 00:23:51.830 "supported_io_types": { 00:23:51.830 "read": true, 00:23:51.830 "write": true, 00:23:51.830 "unmap": false, 00:23:51.830 "flush": true, 00:23:51.830 "reset": true, 00:23:51.830 "nvme_admin": true, 00:23:51.830 "nvme_io": true, 00:23:51.830 "nvme_io_md": false, 00:23:51.830 "write_zeroes": true, 00:23:51.830 "zcopy": false, 00:23:51.830 "get_zone_info": false, 00:23:51.830 "zone_management": false, 00:23:51.830 "zone_append": false, 00:23:51.830 "compare": true, 00:23:51.830 "compare_and_write": true, 00:23:51.830 "abort": true, 00:23:51.830 "seek_hole": false, 00:23:51.830 "seek_data": false, 00:23:51.830 "copy": true, 00:23:51.830 "nvme_iov_md": false 00:23:51.830 }, 00:23:51.830 "memory_domains": [ 
00:23:51.830 { 00:23:51.830 "dma_device_id": "system", 00:23:51.830 "dma_device_type": 1 00:23:51.830 } 00:23:51.830 ], 00:23:51.830 "driver_specific": { 00:23:51.830 "nvme": [ 00:23:51.830 { 00:23:51.830 "trid": { 00:23:51.830 "trtype": "TCP", 00:23:51.830 "adrfam": "IPv4", 00:23:51.830 "traddr": "10.0.0.2", 00:23:51.830 "trsvcid": "4420", 00:23:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.830 }, 00:23:51.830 "ctrlr_data": { 00:23:51.830 "cntlid": 2, 00:23:51.830 "vendor_id": "0x8086", 00:23:51.830 "model_number": "SPDK bdev Controller", 00:23:51.830 "serial_number": "00000000000000000000", 00:23:51.830 "firmware_revision": "25.01", 00:23:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.830 "oacs": { 00:23:51.830 "security": 0, 00:23:51.830 "format": 0, 00:23:51.830 "firmware": 0, 00:23:51.830 "ns_manage": 0 00:23:51.830 }, 00:23:51.830 "multi_ctrlr": true, 00:23:51.830 "ana_reporting": false 00:23:51.830 }, 00:23:51.830 "vs": { 00:23:51.830 "nvme_version": "1.3" 00:23:51.830 }, 00:23:51.830 "ns_data": { 00:23:51.830 "id": 1, 00:23:51.830 "can_share": true 00:23:51.830 } 00:23:51.830 } 00:23:51.830 ], 00:23:51.830 "mp_policy": "active_passive" 00:23:51.830 } 00:23:51.830 } 00:23:51.830 ] 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.830 13:48:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.la1occ03K3 
00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.la1occ03K3 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.la1occ03K3 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 [2024-11-06 13:48:15.040956] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.831 [2024-11-06 13:48:15.041069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 [2024-11-06 13:48:15.057018] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.831 nvme0n1 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 [ 00:23:51.831 { 00:23:51.831 "name": "nvme0n1", 00:23:51.831 "aliases": [ 00:23:51.831 "a5c1758c-dd03-40c3-bf24-c11a1f6a656c" 00:23:51.831 ], 00:23:51.831 "product_name": "NVMe disk", 00:23:51.831 "block_size": 512, 00:23:51.831 "num_blocks": 2097152, 00:23:51.831 "uuid": "a5c1758c-dd03-40c3-bf24-c11a1f6a656c", 00:23:51.831 "numa_id": 0, 00:23:51.831 "assigned_rate_limits": { 00:23:51.831 "rw_ios_per_sec": 0, 00:23:51.831 
"rw_mbytes_per_sec": 0, 00:23:51.831 "r_mbytes_per_sec": 0, 00:23:51.831 "w_mbytes_per_sec": 0 00:23:51.831 }, 00:23:51.831 "claimed": false, 00:23:51.831 "zoned": false, 00:23:51.831 "supported_io_types": { 00:23:51.831 "read": true, 00:23:51.831 "write": true, 00:23:51.831 "unmap": false, 00:23:51.831 "flush": true, 00:23:51.831 "reset": true, 00:23:51.831 "nvme_admin": true, 00:23:51.831 "nvme_io": true, 00:23:51.831 "nvme_io_md": false, 00:23:51.831 "write_zeroes": true, 00:23:51.831 "zcopy": false, 00:23:51.831 "get_zone_info": false, 00:23:51.831 "zone_management": false, 00:23:51.831 "zone_append": false, 00:23:51.831 "compare": true, 00:23:51.831 "compare_and_write": true, 00:23:51.831 "abort": true, 00:23:51.831 "seek_hole": false, 00:23:51.831 "seek_data": false, 00:23:51.831 "copy": true, 00:23:51.831 "nvme_iov_md": false 00:23:51.831 }, 00:23:51.831 "memory_domains": [ 00:23:51.831 { 00:23:51.831 "dma_device_id": "system", 00:23:51.831 "dma_device_type": 1 00:23:51.831 } 00:23:51.831 ], 00:23:51.831 "driver_specific": { 00:23:51.831 "nvme": [ 00:23:51.831 { 00:23:51.831 "trid": { 00:23:51.831 "trtype": "TCP", 00:23:51.831 "adrfam": "IPv4", 00:23:51.831 "traddr": "10.0.0.2", 00:23:51.831 "trsvcid": "4421", 00:23:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.831 }, 00:23:51.831 "ctrlr_data": { 00:23:51.831 "cntlid": 3, 00:23:51.831 "vendor_id": "0x8086", 00:23:51.831 "model_number": "SPDK bdev Controller", 00:23:51.831 "serial_number": "00000000000000000000", 00:23:51.831 "firmware_revision": "25.01", 00:23:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.831 "oacs": { 00:23:51.831 "security": 0, 00:23:51.831 "format": 0, 00:23:51.831 "firmware": 0, 00:23:51.831 "ns_manage": 0 00:23:51.831 }, 00:23:51.831 "multi_ctrlr": true, 00:23:51.831 "ana_reporting": false 00:23:51.831 }, 00:23:51.831 "vs": { 00:23:51.831 "nvme_version": "1.3" 00:23:51.831 }, 00:23:51.831 "ns_data": { 00:23:51.831 "id": 1, 00:23:51.831 "can_share": true 00:23:51.831 } 
00:23:51.831 } 00:23:51.831 ], 00:23:51.831 "mp_policy": "active_passive" 00:23:51.831 } 00:23:51.831 } 00:23:51.831 ] 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.la1occ03K3 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.831 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.831 rmmod nvme_tcp 00:23:51.831 rmmod nvme_fabrics 00:23:51.831 rmmod nvme_keyring 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:52.093 13:48:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 731795 ']' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 731795 ']' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 731795' 00:23:52.093 killing process with pid 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 731795 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.093 13:48:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.093 13:48:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.640 00:23:54.640 real 0m11.694s 00:23:54.640 user 0m4.033s 00:23:54.640 sys 0m6.152s 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.640 ************************************ 00:23:54.640 END TEST nvmf_async_init 00:23:54.640 ************************************ 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.640 ************************************ 00:23:54.640 START TEST dma 00:23:54.640 ************************************ 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.640 * 
Looking for test storage... 00:23:54.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:54.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.640 --rc genhtml_branch_coverage=1 00:23:54.640 --rc genhtml_function_coverage=1 00:23:54.640 --rc genhtml_legend=1 00:23:54.640 --rc geninfo_all_blocks=1 00:23:54.640 --rc geninfo_unexecuted_blocks=1 00:23:54.640 00:23:54.640 ' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:54.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.640 --rc genhtml_branch_coverage=1 00:23:54.640 --rc genhtml_function_coverage=1 
00:23:54.640 --rc genhtml_legend=1 00:23:54.640 --rc geninfo_all_blocks=1 00:23:54.640 --rc geninfo_unexecuted_blocks=1 00:23:54.640 00:23:54.640 ' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:54.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.640 --rc genhtml_branch_coverage=1 00:23:54.640 --rc genhtml_function_coverage=1 00:23:54.640 --rc genhtml_legend=1 00:23:54.640 --rc geninfo_all_blocks=1 00:23:54.640 --rc geninfo_unexecuted_blocks=1 00:23:54.640 00:23:54.640 ' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:54.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.640 --rc genhtml_branch_coverage=1 00:23:54.640 --rc genhtml_function_coverage=1 00:23:54.640 --rc genhtml_legend=1 00:23:54.640 --rc geninfo_all_blocks=1 00:23:54.640 --rc geninfo_unexecuted_blocks=1 00:23:54.640 00:23:54.640 ' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.640 13:48:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:54.641 
13:48:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:54.641 00:23:54.641 real 0m0.227s 00:23:54.641 user 0m0.138s 00:23:54.641 sys 0m0.101s 00:23:54.641 13:48:17 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:54.641 ************************************ 00:23:54.641 END TEST dma 00:23:54.641 ************************************ 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.641 ************************************ 00:23:54.641 START TEST nvmf_identify 00:23:54.641 ************************************ 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.641 * Looking for test storage... 
00:23:54.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:54.641 13:48:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.902 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:54.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.903 --rc genhtml_branch_coverage=1 00:23:54.903 --rc genhtml_function_coverage=1 00:23:54.903 --rc genhtml_legend=1 00:23:54.903 --rc geninfo_all_blocks=1 00:23:54.903 --rc geninfo_unexecuted_blocks=1 00:23:54.903 00:23:54.903 ' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:23:54.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.903 --rc genhtml_branch_coverage=1 00:23:54.903 --rc genhtml_function_coverage=1 00:23:54.903 --rc genhtml_legend=1 00:23:54.903 --rc geninfo_all_blocks=1 00:23:54.903 --rc geninfo_unexecuted_blocks=1 00:23:54.903 00:23:54.903 ' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:54.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.903 --rc genhtml_branch_coverage=1 00:23:54.903 --rc genhtml_function_coverage=1 00:23:54.903 --rc genhtml_legend=1 00:23:54.903 --rc geninfo_all_blocks=1 00:23:54.903 --rc geninfo_unexecuted_blocks=1 00:23:54.903 00:23:54.903 ' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:54.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.903 --rc genhtml_branch_coverage=1 00:23:54.903 --rc genhtml_function_coverage=1 00:23:54.903 --rc genhtml_legend=1 00:23:54.903 --rc geninfo_all_blocks=1 00:23:54.903 --rc geninfo_unexecuted_blocks=1 00:23:54.903 00:23:54.903 ' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.903 13:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.045 13:48:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.045 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.046 
13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.046 13:48:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:24:03.046 00:24:03.046 --- 10.0.0.2 ping statistics --- 00:24:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.046 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:24:03.046 00:24:03.046 --- 10.0.0.1 ping statistics --- 00:24:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.046 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=736444 00:24:03.046 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 736444 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 736444 ']' 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.047 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 [2024-11-06 13:48:25.452941] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:24:03.047 [2024-11-06 13:48:25.453007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.047 [2024-11-06 13:48:25.536537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.047 [2024-11-06 13:48:25.580804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.047 [2024-11-06 13:48:25.580837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.047 [2024-11-06 13:48:25.580846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.047 [2024-11-06 13:48:25.580852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.047 [2024-11-06 13:48:25.580858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.047 [2024-11-06 13:48:25.582448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.047 [2024-11-06 13:48:25.582564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.047 [2024-11-06 13:48:25.582701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.047 [2024-11-06 13:48:25.582702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 [2024-11-06 13:48:26.269585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 Malloc0 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 [2024-11-06 13:48:26.382100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 13:48:26 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.047 [ 00:24:03.047 { 00:24:03.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.047 "subtype": "Discovery", 00:24:03.047 "listen_addresses": [ 00:24:03.047 { 00:24:03.047 "trtype": "TCP", 00:24:03.047 "adrfam": "IPv4", 00:24:03.047 "traddr": "10.0.0.2", 00:24:03.047 "trsvcid": "4420" 00:24:03.047 } 00:24:03.047 ], 00:24:03.047 "allow_any_host": true, 00:24:03.047 "hosts": [] 00:24:03.047 }, 00:24:03.047 { 00:24:03.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.047 "subtype": "NVMe", 00:24:03.047 "listen_addresses": [ 00:24:03.047 { 00:24:03.047 "trtype": "TCP", 00:24:03.047 "adrfam": "IPv4", 00:24:03.047 "traddr": "10.0.0.2", 00:24:03.047 "trsvcid": "4420" 00:24:03.047 } 00:24:03.047 ], 00:24:03.047 "allow_any_host": true, 00:24:03.047 "hosts": [], 00:24:03.047 "serial_number": "SPDK00000000000001", 00:24:03.047 "model_number": "SPDK bdev Controller", 00:24:03.047 "max_namespaces": 32, 00:24:03.047 "min_cntlid": 1, 00:24:03.047 "max_cntlid": 65519, 00:24:03.047 "namespaces": [ 00:24:03.047 { 00:24:03.047 "nsid": 1, 00:24:03.047 "bdev_name": "Malloc0", 00:24:03.047 "name": "Malloc0", 00:24:03.047 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:03.047 "eui64": "ABCDEF0123456789", 00:24:03.047 "uuid": "35e82719-124f-4a96-aba1-f00c7b608000" 00:24:03.047 } 00:24:03.047 ] 00:24:03.047 } 00:24:03.047 ] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.047 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:03.309 [2024-11-06 13:48:26.446654] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:24:03.309 [2024-11-06 13:48:26.446724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736705 ] 00:24:03.309 [2024-11-06 13:48:26.501842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:03.309 [2024-11-06 13:48:26.501890] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.309 [2024-11-06 13:48:26.501896] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.309 [2024-11-06 13:48:26.501908] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.309 [2024-11-06 13:48:26.501917] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.309 [2024-11-06 13:48:26.502607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:03.309 [2024-11-06 13:48:26.502638] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c9d690 0 00:24:03.309 [2024-11-06 13:48:26.512760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.309 [2024-11-06 13:48:26.512773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.309 [2024-11-06 13:48:26.512777] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.309 [2024-11-06 13:48:26.512781] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.309 [2024-11-06 13:48:26.512812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.512817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.512821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.309 [2024-11-06 13:48:26.512833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.309 [2024-11-06 13:48:26.512851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.309 [2024-11-06 13:48:26.520759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.309 [2024-11-06 13:48:26.520769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.309 [2024-11-06 13:48:26.520773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.520777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.309 [2024-11-06 13:48:26.520786] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.309 [2024-11-06 13:48:26.520793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:03.309 [2024-11-06 13:48:26.520799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:03.309 [2024-11-06 13:48:26.520811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.520816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.520819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 
00:24:03.309 [2024-11-06 13:48:26.520827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.309 [2024-11-06 13:48:26.520841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.309 [2024-11-06 13:48:26.521010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.309 [2024-11-06 13:48:26.521017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.309 [2024-11-06 13:48:26.521020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.521024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.309 [2024-11-06 13:48:26.521030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:03.309 [2024-11-06 13:48:26.521037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:03.309 [2024-11-06 13:48:26.521048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.521052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.521055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.309 [2024-11-06 13:48:26.521062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.309 [2024-11-06 13:48:26.521073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.309 [2024-11-06 13:48:26.521271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.309 [2024-11-06 13:48:26.521277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:03.309 [2024-11-06 13:48:26.521281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.309 [2024-11-06 13:48:26.521285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.309 [2024-11-06 13:48:26.521290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:03.309 [2024-11-06 13:48:26.521298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.309 [2024-11-06 13:48:26.521304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.521318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.310 [2024-11-06 13:48:26.521329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 13:48:26.521548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.521555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.521558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.521567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.310 [2024-11-06 13:48:26.521576] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.521590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.310 [2024-11-06 13:48:26.521600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 13:48:26.521795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.521802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.521805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.521814] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.310 [2024-11-06 13:48:26.521819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.310 [2024-11-06 13:48:26.521826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.310 [2024-11-06 13:48:26.521939] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:03.310 [2024-11-06 13:48:26.521944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:03.310 [2024-11-06 13:48:26.521953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.521960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.521967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.310 [2024-11-06 13:48:26.521978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 13:48:26.522179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.522186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.522189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.522198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.310 [2024-11-06 13:48:26.522207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.522221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.310 [2024-11-06 13:48:26.522231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 
13:48:26.522426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.522432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.522435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.522444] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.310 [2024-11-06 13:48:26.522448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.522456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:03.310 [2024-11-06 13:48:26.522463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.522472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.522483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.310 [2024-11-06 13:48:26.522493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 13:48:26.522687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.310 [2024-11-06 13:48:26.522694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:03.310 [2024-11-06 13:48:26.522697] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522704] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d690): datao=0, datal=4096, cccid=0 00:24:03.310 [2024-11-06 13:48:26.522708] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cff100) on tqpair(0x1c9d690): expected_datao=0, payload_size=4096 00:24:03.310 [2024-11-06 13:48:26.522713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522728] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.522733] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.562918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.562928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.562932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.562936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.562944] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:03.310 [2024-11-06 13:48:26.562949] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:03.310 [2024-11-06 13:48:26.562953] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:03.310 [2024-11-06 13:48:26.562962] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:03.310 [2024-11-06 13:48:26.562966] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:03.310 [2024-11-06 13:48:26.562971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.562981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.562989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.562993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.562997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.563004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.310 [2024-11-06 13:48:26.563016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.310 [2024-11-06 13:48:26.563237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.310 [2024-11-06 13:48:26.563243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.310 [2024-11-06 13:48:26.563247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690 00:24:03.310 [2024-11-06 13:48:26.563258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.563272] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.310 [2024-11-06 13:48:26.563278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.563292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.310 [2024-11-06 13:48:26.563300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.563314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.310 [2024-11-06 13:48:26.563321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.310 [2024-11-06 13:48:26.563328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d690) 00:24:03.310 [2024-11-06 13:48:26.563334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.310 [2024-11-06 13:48:26.563339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.563347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.310 [2024-11-06 13:48:26.563354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.563364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.311 [2024-11-06 13:48:26.563376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff100, cid 0, qid 0 00:24:03.311 [2024-11-06 13:48:26.563381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff280, cid 1, qid 0 00:24:03.311 [2024-11-06 13:48:26.563386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff400, cid 2, qid 0 00:24:03.311 [2024-11-06 13:48:26.563391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff580, cid 3, qid 0 00:24:03.311 [2024-11-06 13:48:26.563396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff700, cid 4, qid 0 00:24:03.311 [2024-11-06 13:48:26.563631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.311 [2024-11-06 13:48:26.563638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.311 [2024-11-06 13:48:26.563641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff700) on tqpair=0x1c9d690 00:24:03.311 [2024-11-06 13:48:26.563652] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:03.311 [2024-11-06 13:48:26.563657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:03.311 [2024-11-06 13:48:26.563668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.563679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.311 [2024-11-06 13:48:26.563689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff700, cid 4, qid 0 00:24:03.311 [2024-11-06 13:48:26.563863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.311 [2024-11-06 13:48:26.563869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.311 [2024-11-06 13:48:26.563873] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d690): datao=0, datal=4096, cccid=4 00:24:03.311 [2024-11-06 13:48:26.563882] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cff700) on tqpair(0x1c9d690): expected_datao=0, payload_size=4096 00:24:03.311 [2024-11-06 13:48:26.563888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.563936] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.311 [2024-11-06 13:48:26.564073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.311 [2024-11-06 13:48:26.564077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1cff700) on tqpair=0x1c9d690 00:24:03.311 [2024-11-06 13:48:26.564092] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:03.311 [2024-11-06 13:48:26.564111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.564123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.311 [2024-11-06 13:48:26.564129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.564143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.311 [2024-11-06 13:48:26.564157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff700, cid 4, qid 0 00:24:03.311 [2024-11-06 13:48:26.564162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff880, cid 5, qid 0 00:24:03.311 [2024-11-06 13:48:26.564410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.311 [2024-11-06 13:48:26.564416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.311 [2024-11-06 13:48:26.564419] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d690): datao=0, datal=1024, cccid=4 00:24:03.311 [2024-11-06 13:48:26.564428] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cff700) on tqpair(0x1c9d690): expected_datao=0, payload_size=1024 00:24:03.311 [2024-11-06 13:48:26.564432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564439] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564442] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.311 [2024-11-06 13:48:26.564454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.311 [2024-11-06 13:48:26.564457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.564461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff880) on tqpair=0x1c9d690 00:24:03.311 [2024-11-06 13:48:26.608756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.311 [2024-11-06 13:48:26.608767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.311 [2024-11-06 13:48:26.608770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.608774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff700) on tqpair=0x1c9d690 00:24:03.311 [2024-11-06 13:48:26.608785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.608789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.608796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.311 [2024-11-06 13:48:26.608815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff700, cid 4, qid 0 00:24:03.311 [2024-11-06 13:48:26.609046] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.311 [2024-11-06 13:48:26.609053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.311 [2024-11-06 13:48:26.609056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.609060] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d690): datao=0, datal=3072, cccid=4 00:24:03.311 [2024-11-06 13:48:26.609064] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cff700) on tqpair(0x1c9d690): expected_datao=0, payload_size=3072 00:24:03.311 [2024-11-06 13:48:26.609069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.609086] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.609090] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.649920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.311 [2024-11-06 13:48:26.649929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.311 [2024-11-06 13:48:26.649932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.649936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff700) on tqpair=0x1c9d690 00:24:03.311 [2024-11-06 13:48:26.649945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.649949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9d690) 00:24:03.311 [2024-11-06 13:48:26.649956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.311 [2024-11-06 13:48:26.649970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff700, cid 4, qid 0 00:24:03.311 [2024-11-06 
13:48:26.650174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.311 [2024-11-06 13:48:26.650180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.311 [2024-11-06 13:48:26.650183] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.650187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9d690): datao=0, datal=8, cccid=4 00:24:03.311 [2024-11-06 13:48:26.650192] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cff700) on tqpair(0x1c9d690): expected_datao=0, payload_size=8 00:24:03.311 [2024-11-06 13:48:26.650196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.650203] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.311 [2024-11-06 13:48:26.650206] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.576 [2024-11-06 13:48:26.691755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.576 [2024-11-06 13:48:26.691767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.576 [2024-11-06 13:48:26.691771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.576 [2024-11-06 13:48:26.691775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff700) on tqpair=0x1c9d690 00:24:03.576 ===================================================== 00:24:03.576 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:03.576 ===================================================== 00:24:03.576 Controller Capabilities/Features 00:24:03.576 ================================ 00:24:03.576 Vendor ID: 0000 00:24:03.576 Subsystem Vendor ID: 0000 00:24:03.576 Serial Number: .................... 00:24:03.576 Model Number: ........................................ 
00:24:03.576 Firmware Version: 25.01 00:24:03.576 Recommended Arb Burst: 0 00:24:03.576 IEEE OUI Identifier: 00 00 00 00:24:03.576 Multi-path I/O 00:24:03.576 May have multiple subsystem ports: No 00:24:03.576 May have multiple controllers: No 00:24:03.576 Associated with SR-IOV VF: No 00:24:03.576 Max Data Transfer Size: 131072 00:24:03.576 Max Number of Namespaces: 0 00:24:03.576 Max Number of I/O Queues: 1024 00:24:03.576 NVMe Specification Version (VS): 1.3 00:24:03.576 NVMe Specification Version (Identify): 1.3 00:24:03.576 Maximum Queue Entries: 128 00:24:03.576 Contiguous Queues Required: Yes 00:24:03.576 Arbitration Mechanisms Supported 00:24:03.576 Weighted Round Robin: Not Supported 00:24:03.576 Vendor Specific: Not Supported 00:24:03.576 Reset Timeout: 15000 ms 00:24:03.576 Doorbell Stride: 4 bytes 00:24:03.576 NVM Subsystem Reset: Not Supported 00:24:03.576 Command Sets Supported 00:24:03.576 NVM Command Set: Supported 00:24:03.576 Boot Partition: Not Supported 00:24:03.576 Memory Page Size Minimum: 4096 bytes 00:24:03.576 Memory Page Size Maximum: 4096 bytes 00:24:03.576 Persistent Memory Region: Not Supported 00:24:03.576 Optional Asynchronous Events Supported 00:24:03.576 Namespace Attribute Notices: Not Supported 00:24:03.576 Firmware Activation Notices: Not Supported 00:24:03.576 ANA Change Notices: Not Supported 00:24:03.576 PLE Aggregate Log Change Notices: Not Supported 00:24:03.576 LBA Status Info Alert Notices: Not Supported 00:24:03.576 EGE Aggregate Log Change Notices: Not Supported 00:24:03.576 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.576 Zone Descriptor Change Notices: Not Supported 00:24:03.576 Discovery Log Change Notices: Supported 00:24:03.576 Controller Attributes 00:24:03.576 128-bit Host Identifier: Not Supported 00:24:03.576 Non-Operational Permissive Mode: Not Supported 00:24:03.576 NVM Sets: Not Supported 00:24:03.576 Read Recovery Levels: Not Supported 00:24:03.576 Endurance Groups: Not Supported 00:24:03.576 
Predictable Latency Mode: Not Supported 00:24:03.576 Traffic Based Keep ALive: Not Supported 00:24:03.576 Namespace Granularity: Not Supported 00:24:03.576 SQ Associations: Not Supported 00:24:03.576 UUID List: Not Supported 00:24:03.576 Multi-Domain Subsystem: Not Supported 00:24:03.576 Fixed Capacity Management: Not Supported 00:24:03.576 Variable Capacity Management: Not Supported 00:24:03.576 Delete Endurance Group: Not Supported 00:24:03.576 Delete NVM Set: Not Supported 00:24:03.576 Extended LBA Formats Supported: Not Supported 00:24:03.576 Flexible Data Placement Supported: Not Supported 00:24:03.576 00:24:03.576 Controller Memory Buffer Support 00:24:03.576 ================================ 00:24:03.576 Supported: No 00:24:03.576 00:24:03.576 Persistent Memory Region Support 00:24:03.576 ================================ 00:24:03.576 Supported: No 00:24:03.576 00:24:03.576 Admin Command Set Attributes 00:24:03.576 ============================ 00:24:03.576 Security Send/Receive: Not Supported 00:24:03.576 Format NVM: Not Supported 00:24:03.576 Firmware Activate/Download: Not Supported 00:24:03.576 Namespace Management: Not Supported 00:24:03.576 Device Self-Test: Not Supported 00:24:03.576 Directives: Not Supported 00:24:03.576 NVMe-MI: Not Supported 00:24:03.576 Virtualization Management: Not Supported 00:24:03.576 Doorbell Buffer Config: Not Supported 00:24:03.576 Get LBA Status Capability: Not Supported 00:24:03.577 Command & Feature Lockdown Capability: Not Supported 00:24:03.577 Abort Command Limit: 1 00:24:03.577 Async Event Request Limit: 4 00:24:03.577 Number of Firmware Slots: N/A 00:24:03.577 Firmware Slot 1 Read-Only: N/A 00:24:03.577 Firmware Activation Without Reset: N/A 00:24:03.577 Multiple Update Detection Support: N/A 00:24:03.577 Firmware Update Granularity: No Information Provided 00:24:03.577 Per-Namespace SMART Log: No 00:24:03.577 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.577 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:03.577 Command Effects Log Page: Not Supported 00:24:03.577 Get Log Page Extended Data: Supported 00:24:03.577 Telemetry Log Pages: Not Supported 00:24:03.577 Persistent Event Log Pages: Not Supported 00:24:03.577 Supported Log Pages Log Page: May Support 00:24:03.577 Commands Supported & Effects Log Page: Not Supported 00:24:03.577 Feature Identifiers & Effects Log Page:May Support 00:24:03.577 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.577 Data Area 4 for Telemetry Log: Not Supported 00:24:03.577 Error Log Page Entries Supported: 128 00:24:03.577 Keep Alive: Not Supported 00:24:03.577 00:24:03.577 NVM Command Set Attributes 00:24:03.577 ========================== 00:24:03.577 Submission Queue Entry Size 00:24:03.577 Max: 1 00:24:03.577 Min: 1 00:24:03.577 Completion Queue Entry Size 00:24:03.577 Max: 1 00:24:03.577 Min: 1 00:24:03.577 Number of Namespaces: 0 00:24:03.577 Compare Command: Not Supported 00:24:03.577 Write Uncorrectable Command: Not Supported 00:24:03.577 Dataset Management Command: Not Supported 00:24:03.577 Write Zeroes Command: Not Supported 00:24:03.577 Set Features Save Field: Not Supported 00:24:03.577 Reservations: Not Supported 00:24:03.577 Timestamp: Not Supported 00:24:03.577 Copy: Not Supported 00:24:03.577 Volatile Write Cache: Not Present 00:24:03.577 Atomic Write Unit (Normal): 1 00:24:03.577 Atomic Write Unit (PFail): 1 00:24:03.577 Atomic Compare & Write Unit: 1 00:24:03.577 Fused Compare & Write: Supported 00:24:03.577 Scatter-Gather List 00:24:03.577 SGL Command Set: Supported 00:24:03.577 SGL Keyed: Supported 00:24:03.577 SGL Bit Bucket Descriptor: Not Supported 00:24:03.577 SGL Metadata Pointer: Not Supported 00:24:03.577 Oversized SGL: Not Supported 00:24:03.577 SGL Metadata Address: Not Supported 00:24:03.577 SGL Offset: Supported 00:24:03.577 Transport SGL Data Block: Not Supported 00:24:03.577 Replay Protected Memory Block: Not Supported 00:24:03.577 00:24:03.577 
Firmware Slot Information
=========================
Active slot: 0

Error Log
=========

Active Namespaces
=================

Discovery Log Page
==================
Generation Counter: 2
Number of Records: 2
Record Format: 0

Discovery Log Entry 0
----------------------
Transport Type: 3 (TCP)
Address Family: 1 (IPv4)
Subsystem Type: 3 (Current Discovery Subsystem)
Entry Flags:
  Duplicate Returned Information: 1
  Explicit Persistent Connection Support for Discovery: 1
Transport Requirements:
  Secure Channel: Not Required
Port ID: 0 (0x0000)
Controller ID: 65535 (0xffff)
Admin Max SQ Size: 128
Transport Service Identifier: 4420
NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
Transport Address: 10.0.0.2

Discovery Log Entry 1
----------------------
Transport Type: 3 (TCP)
Address Family: 1 (IPv4)
Subsystem Type: 2 (NVM Subsystem)
Entry Flags:
  Duplicate Returned Information: 0
  Explicit Persistent Connection Support for Discovery: 0
Transport Requirements:
  Secure Channel: Not Required
Port ID: 0 (0x0000)
Controller ID: 65535 (0xffff)
Admin Max SQ Size: 128
Transport Service Identifier: 4420
NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
Transport Address: 10.0.0.2

[2024-11-06 13:48:26.691867] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
[2024-11-06 13:48:26.691878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff100) on tqpair=0x1c9d690
[2024-11-06 13:48:26.691885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:48:26.691890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff280) on tqpair=0x1c9d690
[2024-11-06 13:48:26.691895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:48:26.691900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff400) on tqpair=0x1c9d690
[2024-11-06 13:48:26.691905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:48:26.691912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff580) on tqpair=0x1c9d690
[2024-11-06 13:48:26.691916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:48:26.691928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-06 13:48:26.691932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-06 13:48:26.691935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d690)
[2024-11-06 13:48:26.691943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:48:26.691958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff580, cid 3, qid 0
[2024-11-06 13:48:26.692136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-06 13:48:26.692143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-06 13:48:26.692147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-06 13:48:26.692151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff580) on tqpair=0x1c9d690
[2024-11-06 13:48:26.692158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-06 13:48:26.692162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-06 13:48:26.692165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9d690)
[2024-11-06 13:48:26.692172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:48:26.692185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cff580, cid 3, qid 0
[2024-11-06 13:48:26.692401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-06 13:48:26.692407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-06 13:48:26.692410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-06 13:48:26.692414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff580) on tqpair=0x1c9d690
[2024-11-06 13:48:26.692419] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
[2024-11-06 13:48:26.692424] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
[... identical FABRIC PROPERTY GET capsule/response DEBUG cycles (13:48:26.692433 through 13:48:26.695693) elided ...]
[2024-11-06 13:48:26.699754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-06 13:48:26.699764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-06 13:48:26.699767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-06 13:48:26.699771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cff580) on tqpair=0x1c9d690
[2024-11-06 13:48:26.699779] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds

13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
[2024-11-06 13:48:26.739200] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization...
[2024-11-06 13:48:26.739245] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736798 ]
[2024-11-06 13:48:26.792832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
[2024-11-06 13:48:26.792882] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
[2024-11-06 13:48:26.792887] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
[2024-11-06 13:48:26.792899] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
[2024-11-06 13:48:26.792909] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
[2024-11-06 13:48:26.796952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
[2024-11-06 13:48:26.796979] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc57690 0
[2024-11-06 13:48:26.804760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
[2024-11-06 13:48:26.804772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
[2024-11-06 13:48:26.804777] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
[2024-11-06 13:48:26.804780] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
[2024-11-06 13:48:26.804808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-06 13:48:26.804814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-06 13:48:26.804818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57690)
[2024-11-06 13:48:26.804829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
[2024-11-06 13:48:26.804849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9100, cid 0, qid 0
[2024-11-06 13:48:26.812758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-06 13:48:26.812768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-06 13:48:26.812771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-06 13:48:26.812776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9100) on tqpair=0xc57690
[2024-11-06 13:48:26.812784] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
[2024-11-06 13:48:26.812791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
[2024-11-06 13:48:26.812796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
[... FABRIC PROPERTY GET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.813027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
[2024-11-06 13:48:26.813035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
[... FABRIC PROPERTY GET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.813326] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
[2024-11-06 13:48:26.813334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
[... FABRIC PROPERTY GET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.813546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
[... FABRIC PROPERTY GET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.813829] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
[2024-11-06 13:48:26.813834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
[2024-11-06 13:48:26.813842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
[2024-11-06 13:48:26.813950] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
[2024-11-06 13:48:26.813955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
[... FABRIC PROPERTY SET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.814222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
[... FABRIC PROPERTY GET capsule/response DEBUG cycle elided ...]
[2024-11-06 13:48:26.814474] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[2024-11-06 13:48:26.814478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
[2024-11-06 13:48:26.814486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
[2024-11-06 13:48:26.814493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
[2024-11-06 13:48:26.814513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:48:26.814523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9100, cid 0, qid 0
[2024-11-06 13:48:26.814731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
[2024-11-06 13:48:26.814737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
[2024-11-06 13:48:26.814741] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
[2024-11-06 13:48:26.814750] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=4096, cccid=0
[2024-11-06 13:48:26.814755] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9100) on tqpair(0xc57690): expected_datao=0, payload_size=4096
[2024-11-06 13:48:26.814759] nvme_tcp.c:
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.814778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.814782] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.814960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.580 [2024-11-06 13:48:26.814966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.580 [2024-11-06 13:48:26.814970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.814974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9100) on tqpair=0xc57690 00:24:03.580 [2024-11-06 13:48:26.814981] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:03.580 [2024-11-06 13:48:26.814986] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:03.580 [2024-11-06 13:48:26.814990] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:03.580 [2024-11-06 13:48:26.814999] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:03.580 [2024-11-06 13:48:26.815004] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:03.580 [2024-11-06 13:48:26.815008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815034] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.580 [2024-11-06 13:48:26.815056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9100, cid 0, qid 0 00:24:03.580 [2024-11-06 13:48:26.815226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.580 [2024-11-06 13:48:26.815232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.580 [2024-11-06 13:48:26.815236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9100) on tqpair=0xc57690 00:24:03.580 [2024-11-06 13:48:26.815247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.580 [2024-11-06 13:48:26.815266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:03.580 [2024-11-06 13:48:26.815286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.580 [2024-11-06 13:48:26.815305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.580 [2024-11-06 13:48:26.815323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.580 [2024-11-06 13:48:26.815359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xcb9100, cid 0, qid 0 00:24:03.580 [2024-11-06 13:48:26.815365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9280, cid 1, qid 0 00:24:03.580 [2024-11-06 13:48:26.815370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9400, cid 2, qid 0 00:24:03.580 [2024-11-06 13:48:26.815374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.580 [2024-11-06 13:48:26.815379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.580 [2024-11-06 13:48:26.815601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.580 [2024-11-06 13:48:26.815607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.580 [2024-11-06 13:48:26.815611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.580 [2024-11-06 13:48:26.815622] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:03.580 [2024-11-06 13:48:26.815627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:03.580 [2024-11-06 13:48:26.815647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.580 [2024-11-06 13:48:26.815651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.580 [2024-11-06 
13:48:26.815654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.580 [2024-11-06 13:48:26.815661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.580 [2024-11-06 13:48:26.815671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.580 [2024-11-06 13:48:26.815852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.580 [2024-11-06 13:48:26.815859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.815863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.815866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.815931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.815940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.815948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.815951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.815958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.815969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.581 [2024-11-06 13:48:26.816160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.581 [2024-11-06 13:48:26.816167] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.581 [2024-11-06 13:48:26.816170] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.816174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=4096, cccid=4 00:24:03.581 [2024-11-06 13:48:26.816178] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9700) on tqpair(0xc57690): expected_datao=0, payload_size=4096 00:24:03.581 [2024-11-06 13:48:26.816183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.816204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.816208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.860755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.860765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.860771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.860775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.860785] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:03.581 [2024-11-06 13:48:26.860798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.860808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.860815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.860819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.860826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.860838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.581 [2024-11-06 13:48:26.860991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.581 [2024-11-06 13:48:26.860998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.581 [2024-11-06 13:48:26.861001] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.861005] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=4096, cccid=4 00:24:03.581 [2024-11-06 13:48:26.861009] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9700) on tqpair(0xc57690): expected_datao=0, payload_size=4096 00:24:03.581 [2024-11-06 13:48:26.861014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.861028] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.861032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.901923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.901932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.901935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.901939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.901952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:03.581 [2024-11-06 
13:48:26.901962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.901969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.901973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.901980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.901991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.581 [2024-11-06 13:48:26.902173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.581 [2024-11-06 13:48:26.902179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.581 [2024-11-06 13:48:26.902183] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.902187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=4096, cccid=4 00:24:03.581 [2024-11-06 13:48:26.902191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9700) on tqpair(0xc57690): expected_datao=0, payload_size=4096 00:24:03.581 [2024-11-06 13:48:26.902196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.902210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.902217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.942853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.942863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.942866] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.942870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.942878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942916] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:03.581 [2024-11-06 13:48:26.942921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:03.581 [2024-11-06 13:48:26.942926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:03.581 [2024-11-06 13:48:26.942940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.942944] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.942951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.942958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.942961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.942965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.942971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.581 [2024-11-06 13:48:26.942985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.581 [2024-11-06 13:48:26.942990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9880, cid 5, qid 0 00:24:03.581 [2024-11-06 13:48:26.943160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.943167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.943170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.943181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.943187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.943191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9880) on tqpair=0xc57690 00:24:03.581 [2024-11-06 
13:48:26.943204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.943216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.943227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9880, cid 5, qid 0 00:24:03.581 [2024-11-06 13:48:26.943376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.943383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.943386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9880) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.943399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57690) 00:24:03.581 [2024-11-06 13:48:26.943410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.581 [2024-11-06 13:48:26.943419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9880, cid 5, qid 0 00:24:03.581 [2024-11-06 13:48:26.943609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.581 [2024-11-06 13:48:26.943616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.581 [2024-11-06 13:48:26.943619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xcb9880) on tqpair=0xc57690 00:24:03.581 [2024-11-06 13:48:26.943632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.581 [2024-11-06 13:48:26.943636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57690) 00:24:03.582 [2024-11-06 13:48:26.943642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.582 [2024-11-06 13:48:26.943652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9880, cid 5, qid 0 00:24:03.582 [2024-11-06 13:48:26.943864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.582 [2024-11-06 13:48:26.943871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.582 [2024-11-06 13:48:26.943875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.943879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9880) on tqpair=0xc57690 00:24:03.582 [2024-11-06 13:48:26.943893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.943897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc57690) 00:24:03.582 [2024-11-06 13:48:26.943904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.582 [2024-11-06 13:48:26.943911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.943915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc57690) 00:24:03.582 [2024-11-06 13:48:26.943921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.582 
[2024-11-06 13:48:26.943928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.943932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc57690) 00:24:03.582 [2024-11-06 13:48:26.943938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.582 [2024-11-06 13:48:26.943946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.943949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc57690) 00:24:03.582 [2024-11-06 13:48:26.943957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.582 [2024-11-06 13:48:26.943969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9880, cid 5, qid 0 00:24:03.582 [2024-11-06 13:48:26.943974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9700, cid 4, qid 0 00:24:03.582 [2024-11-06 13:48:26.943979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9a00, cid 6, qid 0 00:24:03.582 [2024-11-06 13:48:26.943984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b80, cid 7, qid 0 00:24:03.582 [2024-11-06 13:48:26.944187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.582 [2024-11-06 13:48:26.944193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.582 [2024-11-06 13:48:26.944197] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944200] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=8192, cccid=5 00:24:03.582 [2024-11-06 13:48:26.944205] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xcb9880) on tqpair(0xc57690): expected_datao=0, payload_size=8192 00:24:03.582 [2024-11-06 13:48:26.944209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944285] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944289] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.582 [2024-11-06 13:48:26.944301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.582 [2024-11-06 13:48:26.944304] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944308] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=512, cccid=4 00:24:03.582 [2024-11-06 13:48:26.944313] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9700) on tqpair(0xc57690): expected_datao=0, payload_size=512 00:24:03.582 [2024-11-06 13:48:26.944317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944323] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944327] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.582 [2024-11-06 13:48:26.944338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.582 [2024-11-06 13:48:26.944342] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944345] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=512, cccid=6 00:24:03.582 [2024-11-06 13:48:26.944350] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9a00) on tqpair(0xc57690): expected_datao=0, 
payload_size=512 00:24:03.582 [2024-11-06 13:48:26.944354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944360] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944364] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.582 [2024-11-06 13:48:26.944375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.582 [2024-11-06 13:48:26.944379] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944382] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc57690): datao=0, datal=4096, cccid=7 00:24:03.582 [2024-11-06 13:48:26.944387] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9b80) on tqpair(0xc57690): expected_datao=0, payload_size=4096 00:24:03.582 [2024-11-06 13:48:26.944391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944398] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944401] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.582 [2024-11-06 13:48:26.944426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.582 [2024-11-06 13:48:26.944429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9880) on tqpair=0xc57690 00:24:03.582 [2024-11-06 13:48:26.944445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.582 [2024-11-06 13:48:26.944450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.582 [2024-11-06 
13:48:26.944454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9700) on tqpair=0xc57690 00:24:03.582 [2024-11-06 13:48:26.944467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.582 [2024-11-06 13:48:26.944473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.582 [2024-11-06 13:48:26.944477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9a00) on tqpair=0xc57690 00:24:03.582 [2024-11-06 13:48:26.944488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.582 [2024-11-06 13:48:26.944494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.582 [2024-11-06 13:48:26.944497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.582 [2024-11-06 13:48:26.944501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b80) on tqpair=0xc57690 00:24:03.582 ===================================================== 00:24:03.582 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.582 ===================================================== 00:24:03.582 Controller Capabilities/Features 00:24:03.582 ================================ 00:24:03.582 Vendor ID: 8086 00:24:03.582 Subsystem Vendor ID: 8086 00:24:03.582 Serial Number: SPDK00000000000001 00:24:03.582 Model Number: SPDK bdev Controller 00:24:03.582 Firmware Version: 25.01 00:24:03.582 Recommended Arb Burst: 6 00:24:03.582 IEEE OUI Identifier: e4 d2 5c 00:24:03.582 Multi-path I/O 00:24:03.582 May have multiple subsystem ports: Yes 00:24:03.582 May have multiple controllers: Yes 00:24:03.582 Associated with SR-IOV VF: No 00:24:03.582 Max Data Transfer Size: 131072 00:24:03.582 Max Number of Namespaces: 32 00:24:03.582 
Max Number of I/O Queues: 127 00:24:03.582 NVMe Specification Version (VS): 1.3 00:24:03.582 NVMe Specification Version (Identify): 1.3 00:24:03.582 Maximum Queue Entries: 128 00:24:03.582 Contiguous Queues Required: Yes 00:24:03.582 Arbitration Mechanisms Supported 00:24:03.582 Weighted Round Robin: Not Supported 00:24:03.582 Vendor Specific: Not Supported 00:24:03.582 Reset Timeout: 15000 ms 00:24:03.582 Doorbell Stride: 4 bytes 00:24:03.582 NVM Subsystem Reset: Not Supported 00:24:03.582 Command Sets Supported 00:24:03.582 NVM Command Set: Supported 00:24:03.582 Boot Partition: Not Supported 00:24:03.582 Memory Page Size Minimum: 4096 bytes 00:24:03.582 Memory Page Size Maximum: 4096 bytes 00:24:03.582 Persistent Memory Region: Not Supported 00:24:03.582 Optional Asynchronous Events Supported 00:24:03.582 Namespace Attribute Notices: Supported 00:24:03.582 Firmware Activation Notices: Not Supported 00:24:03.582 ANA Change Notices: Not Supported 00:24:03.582 PLE Aggregate Log Change Notices: Not Supported 00:24:03.582 LBA Status Info Alert Notices: Not Supported 00:24:03.582 EGE Aggregate Log Change Notices: Not Supported 00:24:03.582 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.582 Zone Descriptor Change Notices: Not Supported 00:24:03.582 Discovery Log Change Notices: Not Supported 00:24:03.582 Controller Attributes 00:24:03.582 128-bit Host Identifier: Supported 00:24:03.582 Non-Operational Permissive Mode: Not Supported 00:24:03.582 NVM Sets: Not Supported 00:24:03.582 Read Recovery Levels: Not Supported 00:24:03.582 Endurance Groups: Not Supported 00:24:03.582 Predictable Latency Mode: Not Supported 00:24:03.582 Traffic Based Keep ALive: Not Supported 00:24:03.582 Namespace Granularity: Not Supported 00:24:03.582 SQ Associations: Not Supported 00:24:03.582 UUID List: Not Supported 00:24:03.582 Multi-Domain Subsystem: Not Supported 00:24:03.582 Fixed Capacity Management: Not Supported 00:24:03.582 Variable Capacity Management: Not Supported 
00:24:03.582 Delete Endurance Group: Not Supported 00:24:03.582 Delete NVM Set: Not Supported 00:24:03.582 Extended LBA Formats Supported: Not Supported 00:24:03.582 Flexible Data Placement Supported: Not Supported 00:24:03.582 00:24:03.583 Controller Memory Buffer Support 00:24:03.583 ================================ 00:24:03.583 Supported: No 00:24:03.583 00:24:03.583 Persistent Memory Region Support 00:24:03.583 ================================ 00:24:03.583 Supported: No 00:24:03.583 00:24:03.583 Admin Command Set Attributes 00:24:03.583 ============================ 00:24:03.583 Security Send/Receive: Not Supported 00:24:03.583 Format NVM: Not Supported 00:24:03.583 Firmware Activate/Download: Not Supported 00:24:03.583 Namespace Management: Not Supported 00:24:03.583 Device Self-Test: Not Supported 00:24:03.583 Directives: Not Supported 00:24:03.583 NVMe-MI: Not Supported 00:24:03.583 Virtualization Management: Not Supported 00:24:03.583 Doorbell Buffer Config: Not Supported 00:24:03.583 Get LBA Status Capability: Not Supported 00:24:03.583 Command & Feature Lockdown Capability: Not Supported 00:24:03.583 Abort Command Limit: 4 00:24:03.583 Async Event Request Limit: 4 00:24:03.583 Number of Firmware Slots: N/A 00:24:03.583 Firmware Slot 1 Read-Only: N/A 00:24:03.583 Firmware Activation Without Reset: N/A 00:24:03.583 Multiple Update Detection Support: N/A 00:24:03.583 Firmware Update Granularity: No Information Provided 00:24:03.583 Per-Namespace SMART Log: No 00:24:03.583 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.583 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:03.583 Command Effects Log Page: Supported 00:24:03.583 Get Log Page Extended Data: Supported 00:24:03.583 Telemetry Log Pages: Not Supported 00:24:03.583 Persistent Event Log Pages: Not Supported 00:24:03.583 Supported Log Pages Log Page: May Support 00:24:03.583 Commands Supported & Effects Log Page: Not Supported 00:24:03.583 Feature Identifiers & Effects Log Page:May Support 
00:24:03.583 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.583 Data Area 4 for Telemetry Log: Not Supported 00:24:03.583 Error Log Page Entries Supported: 128 00:24:03.583 Keep Alive: Supported 00:24:03.583 Keep Alive Granularity: 10000 ms 00:24:03.583 00:24:03.583 NVM Command Set Attributes 00:24:03.583 ========================== 00:24:03.583 Submission Queue Entry Size 00:24:03.583 Max: 64 00:24:03.583 Min: 64 00:24:03.583 Completion Queue Entry Size 00:24:03.583 Max: 16 00:24:03.583 Min: 16 00:24:03.583 Number of Namespaces: 32 00:24:03.583 Compare Command: Supported 00:24:03.583 Write Uncorrectable Command: Not Supported 00:24:03.583 Dataset Management Command: Supported 00:24:03.583 Write Zeroes Command: Supported 00:24:03.583 Set Features Save Field: Not Supported 00:24:03.583 Reservations: Supported 00:24:03.583 Timestamp: Not Supported 00:24:03.583 Copy: Supported 00:24:03.583 Volatile Write Cache: Present 00:24:03.583 Atomic Write Unit (Normal): 1 00:24:03.583 Atomic Write Unit (PFail): 1 00:24:03.583 Atomic Compare & Write Unit: 1 00:24:03.583 Fused Compare & Write: Supported 00:24:03.583 Scatter-Gather List 00:24:03.583 SGL Command Set: Supported 00:24:03.583 SGL Keyed: Supported 00:24:03.583 SGL Bit Bucket Descriptor: Not Supported 00:24:03.583 SGL Metadata Pointer: Not Supported 00:24:03.583 Oversized SGL: Not Supported 00:24:03.583 SGL Metadata Address: Not Supported 00:24:03.583 SGL Offset: Supported 00:24:03.583 Transport SGL Data Block: Not Supported 00:24:03.583 Replay Protected Memory Block: Not Supported 00:24:03.583 00:24:03.583 Firmware Slot Information 00:24:03.583 ========================= 00:24:03.583 Active slot: 1 00:24:03.583 Slot 1 Firmware Revision: 25.01 00:24:03.583 00:24:03.583 00:24:03.583 Commands Supported and Effects 00:24:03.583 ============================== 00:24:03.583 Admin Commands 00:24:03.583 -------------- 00:24:03.583 Get Log Page (02h): Supported 00:24:03.583 Identify (06h): Supported 00:24:03.583 Abort 
(08h): Supported 00:24:03.583 Set Features (09h): Supported 00:24:03.583 Get Features (0Ah): Supported 00:24:03.583 Asynchronous Event Request (0Ch): Supported 00:24:03.583 Keep Alive (18h): Supported 00:24:03.583 I/O Commands 00:24:03.583 ------------ 00:24:03.583 Flush (00h): Supported LBA-Change 00:24:03.583 Write (01h): Supported LBA-Change 00:24:03.583 Read (02h): Supported 00:24:03.583 Compare (05h): Supported 00:24:03.583 Write Zeroes (08h): Supported LBA-Change 00:24:03.583 Dataset Management (09h): Supported LBA-Change 00:24:03.583 Copy (19h): Supported LBA-Change 00:24:03.583 00:24:03.583 Error Log 00:24:03.583 ========= 00:24:03.583 00:24:03.583 Arbitration 00:24:03.583 =========== 00:24:03.583 Arbitration Burst: 1 00:24:03.583 00:24:03.583 Power Management 00:24:03.583 ================ 00:24:03.583 Number of Power States: 1 00:24:03.583 Current Power State: Power State #0 00:24:03.583 Power State #0: 00:24:03.583 Max Power: 0.00 W 00:24:03.583 Non-Operational State: Operational 00:24:03.583 Entry Latency: Not Reported 00:24:03.583 Exit Latency: Not Reported 00:24:03.583 Relative Read Throughput: 0 00:24:03.583 Relative Read Latency: 0 00:24:03.583 Relative Write Throughput: 0 00:24:03.583 Relative Write Latency: 0 00:24:03.583 Idle Power: Not Reported 00:24:03.583 Active Power: Not Reported 00:24:03.583 Non-Operational Permissive Mode: Not Supported 00:24:03.583 00:24:03.583 Health Information 00:24:03.583 ================== 00:24:03.583 Critical Warnings: 00:24:03.583 Available Spare Space: OK 00:24:03.583 Temperature: OK 00:24:03.583 Device Reliability: OK 00:24:03.583 Read Only: No 00:24:03.583 Volatile Memory Backup: OK 00:24:03.583 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:03.583 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:03.583 Available Spare: 0% 00:24:03.583 Available Spare Threshold: 0% 00:24:03.583 Life Percentage Used:[2024-11-06 13:48:26.944598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.583 
[2024-11-06 13:48:26.944603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc57690) 00:24:03.583 [2024-11-06 13:48:26.944610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.583 [2024-11-06 13:48:26.944622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b80, cid 7, qid 0 00:24:03.845 [2024-11-06 13:48:26.948754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.948763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.948767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.948771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b80) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.948802] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:03.846 [2024-11-06 13:48:26.948811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9100) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.948818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.846 [2024-11-06 13:48:26.948823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9280) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.846 [2024-11-06 13:48:26.948833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9400) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.948838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.846 
[2024-11-06 13:48:26.948843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.948847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.846 [2024-11-06 13:48:26.948855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.948859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.948862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.948872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.948884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.949055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.949062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.949065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.949076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.949090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.949104] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.949280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.949286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.949290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.949299] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:03.846 [2024-11-06 13:48:26.949303] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:03.846 [2024-11-06 13:48:26.949312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.949327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.949337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.949556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.949563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.949566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.949580] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.949594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.949604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.949809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.949816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.949819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.949833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.949844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.949851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.949862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.950060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.950066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.950070] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.950083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.950097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.950107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.950258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.950265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.950268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.950281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.950296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.950306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 
13:48:26.950513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.950519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.950523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.950536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.950550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.950560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.950765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.950772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.950775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.950789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.950805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.950816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.950967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.950974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.950977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.846 [2024-11-06 13:48:26.950990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.846 [2024-11-06 13:48:26.950998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.846 [2024-11-06 13:48:26.951004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.846 [2024-11-06 13:48:26.951014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.846 [2024-11-06 13:48:26.951175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.846 [2024-11-06 13:48:26.951181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.846 [2024-11-06 13:48:26.951185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.951198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 
[2024-11-06 13:48:26.951205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.951212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.951222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.951420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.951427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.951430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.951443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.951457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.951468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.951673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.951679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.951683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 
00:24:03.847 [2024-11-06 13:48:26.951696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.951712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.951722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.951873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.951880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.951884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.951897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.951905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.951911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.951922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.952089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.952095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 
[2024-11-06 13:48:26.952098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.952112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.952126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.952136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.952329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.952335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.952338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.952352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.952366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.952376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 
00:24:03.847 [2024-11-06 13:48:26.952530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.952537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.952540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.952553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.952561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.952568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.952579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.956754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.956762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.956766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.956770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.956779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.956783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.956787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc57690) 00:24:03.847 [2024-11-06 13:48:26.956794] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.847 [2024-11-06 13:48:26.956805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9580, cid 3, qid 0 00:24:03.847 [2024-11-06 13:48:26.956977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.847 [2024-11-06 13:48:26.956984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.847 [2024-11-06 13:48:26.956987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.847 [2024-11-06 13:48:26.956991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9580) on tqpair=0xc57690 00:24:03.847 [2024-11-06 13:48:26.956999] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:03.847 0% 00:24:03.847 Data Units Read: 0 00:24:03.847 Data Units Written: 0 00:24:03.847 Host Read Commands: 0 00:24:03.847 Host Write Commands: 0 00:24:03.847 Controller Busy Time: 0 minutes 00:24:03.847 Power Cycles: 0 00:24:03.847 Power On Hours: 0 hours 00:24:03.847 Unsafe Shutdowns: 0 00:24:03.847 Unrecoverable Media Errors: 0 00:24:03.847 Lifetime Error Log Entries: 0 00:24:03.847 Warning Temperature Time: 0 minutes 00:24:03.847 Critical Temperature Time: 0 minutes 00:24:03.847 00:24:03.847 Number of Queues 00:24:03.847 ================ 00:24:03.847 Number of I/O Submission Queues: 127 00:24:03.847 Number of I/O Completion Queues: 127 00:24:03.847 00:24:03.847 Active Namespaces 00:24:03.847 ================= 00:24:03.847 Namespace ID:1 00:24:03.847 Error Recovery Timeout: Unlimited 00:24:03.847 Command Set Identifier: NVM (00h) 00:24:03.847 Deallocate: Supported 00:24:03.847 Deallocated/Unwritten Error: Not Supported 00:24:03.847 Deallocated Read Value: Unknown 00:24:03.847 Deallocate in Write Zeroes: Not Supported 00:24:03.847 Deallocated Guard Field: 0xFFFF 00:24:03.847 Flush: Supported 
00:24:03.847 Reservation: Supported 00:24:03.847 Namespace Sharing Capabilities: Multiple Controllers 00:24:03.847 Size (in LBAs): 131072 (0GiB) 00:24:03.847 Capacity (in LBAs): 131072 (0GiB) 00:24:03.847 Utilization (in LBAs): 131072 (0GiB) 00:24:03.847 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:03.847 EUI64: ABCDEF0123456789 00:24:03.847 UUID: 35e82719-124f-4a96-aba1-f00c7b608000 00:24:03.847 Thin Provisioning: Not Supported 00:24:03.847 Per-NS Atomic Units: Yes 00:24:03.847 Atomic Boundary Size (Normal): 0 00:24:03.847 Atomic Boundary Size (PFail): 0 00:24:03.847 Atomic Boundary Offset: 0 00:24:03.847 Maximum Single Source Range Length: 65535 00:24:03.847 Maximum Copy Length: 65535 00:24:03.847 Maximum Source Range Count: 1 00:24:03.847 NGUID/EUI64 Never Reused: No 00:24:03.847 Namespace Write Protected: No 00:24:03.847 Number of LBA Formats: 1 00:24:03.847 Current LBA Format: LBA Format #00 00:24:03.847 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:03.847 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.847 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:03.848 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:24:03.848 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:03.848 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.848 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.848 rmmod nvme_tcp 00:24:03.848 rmmod nvme_fabrics 00:24:03.848 rmmod nvme_keyring 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 736444 ']' 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 736444 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 736444 ']' 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 736444 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 736444 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 736444' 00:24:03.848 killing process with pid 736444 00:24:03.848 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 736444 00:24:03.848 13:48:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 736444 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.109 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.019 13:48:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.019 00:24:06.019 real 0m11.457s 00:24:06.019 user 0m8.882s 00:24:06.019 sys 0m5.926s 00:24:06.019 13:48:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.019 13:48:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.019 ************************************ 00:24:06.020 END TEST nvmf_identify 00:24:06.020 ************************************ 00:24:06.020 13:48:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.020 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:06.020 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.020 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.281 ************************************ 00:24:06.281 START TEST nvmf_perf 00:24:06.281 ************************************ 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.281 * Looking for test storage... 00:24:06.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:06.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.281 --rc genhtml_branch_coverage=1 00:24:06.281 --rc genhtml_function_coverage=1 00:24:06.281 --rc genhtml_legend=1 00:24:06.281 --rc geninfo_all_blocks=1 00:24:06.281 --rc geninfo_unexecuted_blocks=1 00:24:06.281 00:24:06.281 ' 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:06.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.281 --rc genhtml_branch_coverage=1 00:24:06.281 --rc genhtml_function_coverage=1 00:24:06.281 --rc genhtml_legend=1 00:24:06.281 --rc geninfo_all_blocks=1 00:24:06.281 --rc geninfo_unexecuted_blocks=1 00:24:06.281 00:24:06.281 ' 00:24:06.281 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:06.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.281 --rc genhtml_branch_coverage=1 00:24:06.282 --rc genhtml_function_coverage=1 00:24:06.282 --rc genhtml_legend=1 00:24:06.282 --rc geninfo_all_blocks=1 00:24:06.282 --rc geninfo_unexecuted_blocks=1 00:24:06.282 00:24:06.282 ' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:06.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.282 --rc genhtml_branch_coverage=1 00:24:06.282 --rc genhtml_function_coverage=1 00:24:06.282 --rc genhtml_legend=1 00:24:06.282 --rc geninfo_all_blocks=1 00:24:06.282 --rc geninfo_unexecuted_blocks=1 00:24:06.282 00:24:06.282 ' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:06.282 13:48:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.282 13:48:29 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.282 13:48:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:06.282 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.425 
13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.425 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.425 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:24:14.426 00:24:14.426 --- 10.0.0.2 ping statistics --- 00:24:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.426 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:14.426 00:24:14.426 --- 10.0.0.1 ping statistics --- 00:24:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.426 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:14.426 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=740972 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 740972 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.426 
13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 740972 ']' 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.426 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.426 [2024-11-06 13:48:37.105830] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:24:14.426 [2024-11-06 13:48:37.105901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.426 [2024-11-06 13:48:37.189132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.426 [2024-11-06 13:48:37.230777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.426 [2024-11-06 13:48:37.230817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.426 [2024-11-06 13:48:37.230826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.426 [2024-11-06 13:48:37.230832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.426 [2024-11-06 13:48:37.230838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.426 [2024-11-06 13:48:37.232441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.426 [2024-11-06 13:48:37.232556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.426 [2024-11-06 13:48:37.232713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.426 [2024-11-06 13:48:37.232714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:14.686 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.258 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:15.258 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.518 13:48:38 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:15.518 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.779 [2024-11-06 13:48:39.002728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.779 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.039 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.039 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.039 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.039 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:16.299 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.559 [2024-11-06 13:48:39.745424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.559 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:16.818 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:16.818 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:16.818 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:16.818 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:18.202 Initializing NVMe Controllers 00:24:18.202 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:18.202 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:18.202 Initialization complete. Launching workers. 00:24:18.202 ======================================================== 00:24:18.202 Latency(us) 00:24:18.202 Device Information : IOPS MiB/s Average min max 00:24:18.202 PCIE (0000:65:00.0) NSID 1 from core 0: 79845.83 311.90 400.05 13.26 4912.20 00:24:18.202 ======================================================== 00:24:18.202 Total : 79845.83 311.90 400.05 13.26 4912.20 00:24:18.202 00:24:18.202 13:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.586 Initializing NVMe Controllers 00:24:19.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.586 Initialization complete. Launching workers. 
00:24:19.586 ======================================================== 00:24:19.586 Latency(us) 00:24:19.586 Device Information : IOPS MiB/s Average min max 00:24:19.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10528.73 105.45 45605.02 00:24:19.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18651.90 7970.81 47890.55 00:24:19.586 ======================================================== 00:24:19.586 Total : 154.00 0.60 13482.61 105.45 47890.55 00:24:19.586 00:24:19.586 13:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.526 Initializing NVMe Controllers 00:24:20.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.526 Initialization complete. Launching workers. 
00:24:20.526 ======================================================== 00:24:20.526 Latency(us) 00:24:20.526 Device Information : IOPS MiB/s Average min max 00:24:20.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10341.50 40.40 3122.73 503.58 41821.23 00:24:20.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.32 15.10 8334.96 5734.92 16155.56 00:24:20.526 ======================================================== 00:24:20.526 Total : 14207.82 55.50 4541.11 503.58 41821.23 00:24:20.526 00:24:20.526 13:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:20.526 13:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:20.526 13:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.070 Initializing NVMe Controllers 00:24:23.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.070 Controller IO queue size 128, less than required. 00:24:23.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.070 Controller IO queue size 128, less than required. 00:24:23.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.070 Initialization complete. Launching workers. 
00:24:23.070 ======================================================== 00:24:23.070 Latency(us) 00:24:23.070 Device Information : IOPS MiB/s Average min max 00:24:23.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1684.77 421.19 77317.02 50681.76 129228.23 00:24:23.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.75 145.69 231351.78 62664.70 378821.27 00:24:23.070 ======================================================== 00:24:23.070 Total : 2267.52 566.88 116903.65 50681.76 378821.27 00:24:23.070 00:24:23.070 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:23.331 No valid NVMe controllers or AIO or URING devices found 00:24:23.331 Initializing NVMe Controllers 00:24:23.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.331 Controller IO queue size 128, less than required. 00:24:23.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.331 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:23.331 Controller IO queue size 128, less than required. 00:24:23.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.331 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:23.331 WARNING: Some requested NVMe devices were skipped 00:24:23.331 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:25.873 Initializing NVMe Controllers 00:24:25.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.873 Controller IO queue size 128, less than required. 00:24:25.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.873 Controller IO queue size 128, less than required. 00:24:25.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:25.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:25.873 Initialization complete. Launching workers. 
00:24:25.873 00:24:25.873 ==================== 00:24:25.873 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:25.873 TCP transport: 00:24:25.873 polls: 21496 00:24:25.873 idle_polls: 11900 00:24:25.873 sock_completions: 9596 00:24:25.873 nvme_completions: 7005 00:24:25.873 submitted_requests: 10664 00:24:25.873 queued_requests: 1 00:24:25.873 00:24:25.873 ==================== 00:24:25.873 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:25.873 TCP transport: 00:24:25.873 polls: 21662 00:24:25.873 idle_polls: 11375 00:24:25.873 sock_completions: 10287 00:24:25.873 nvme_completions: 6135 00:24:25.873 submitted_requests: 9162 00:24:25.873 queued_requests: 1 00:24:25.873 ======================================================== 00:24:25.873 Latency(us) 00:24:25.873 Device Information : IOPS MiB/s Average min max 00:24:25.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1750.93 437.73 74610.90 39454.23 137789.52 00:24:25.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1533.44 383.36 84184.90 39566.26 145168.46 00:24:25.873 ======================================================== 00:24:25.873 Total : 3284.38 821.09 79080.90 39454.23 145168.46 00:24:25.873 00:24:25.873 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:25.873 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.134 13:48:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.134 rmmod nvme_tcp 00:24:26.134 rmmod nvme_fabrics 00:24:26.134 rmmod nvme_keyring 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 740972 ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 740972 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 740972 ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 740972 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 740972 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 740972' 00:24:26.134 killing process with pid 740972 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 740972 00:24:26.134 13:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 740972 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.059 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.320 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.320 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.320 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.320 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.320 13:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.234 00:24:30.234 real 0m24.105s 00:24:30.234 user 0m58.270s 00:24:30.234 sys 0m8.328s 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.234 ************************************ 00:24:30.234 END TEST nvmf_perf 00:24:30.234 ************************************ 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.234 ************************************ 00:24:30.234 START TEST nvmf_fio_host 00:24:30.234 ************************************ 00:24:30.234 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.497 * Looking for test storage... 00:24:30.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.497 13:48:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.497 13:48:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.497 --rc genhtml_branch_coverage=1 00:24:30.497 --rc genhtml_function_coverage=1 00:24:30.497 --rc genhtml_legend=1 00:24:30.497 --rc geninfo_all_blocks=1 00:24:30.497 --rc geninfo_unexecuted_blocks=1 00:24:30.497 00:24:30.497 ' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.497 --rc genhtml_branch_coverage=1 00:24:30.497 --rc genhtml_function_coverage=1 00:24:30.497 --rc genhtml_legend=1 00:24:30.497 --rc geninfo_all_blocks=1 00:24:30.497 --rc geninfo_unexecuted_blocks=1 00:24:30.497 00:24:30.497 ' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.497 --rc genhtml_branch_coverage=1 00:24:30.497 --rc genhtml_function_coverage=1 00:24:30.497 --rc genhtml_legend=1 00:24:30.497 --rc geninfo_all_blocks=1 00:24:30.497 --rc geninfo_unexecuted_blocks=1 00:24:30.497 00:24:30.497 ' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.497 --rc genhtml_branch_coverage=1 00:24:30.497 --rc genhtml_function_coverage=1 00:24:30.497 --rc genhtml_legend=1 00:24:30.497 --rc geninfo_all_blocks=1 00:24:30.497 --rc geninfo_unexecuted_blocks=1 00:24:30.497 00:24:30.497 ' 00:24:30.497 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.498 13:48:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.498 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.772 13:49:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.772 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.772 13:49:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.772 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:24:38.773 00:24:38.773 --- 10.0.0.2 ping statistics --- 00:24:38.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.773 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:38.773 00:24:38.773 --- 10.0.0.1 ping statistics --- 00:24:38.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.773 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=748023 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 748023 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 748023 ']' 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.773 13:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.773 [2024-11-06 13:49:01.492517] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:24:38.773 [2024-11-06 13:49:01.492588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.773 [2024-11-06 13:49:01.578441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.773 [2024-11-06 13:49:01.620077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.773 [2024-11-06 13:49:01.620115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:38.773 [2024-11-06 13:49:01.620124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.773 [2024-11-06 13:49:01.620130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.773 [2024-11-06 13:49:01.620136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.773 [2024-11-06 13:49:01.622014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.773 [2024-11-06 13:49:01.622115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.773 [2024-11-06 13:49:01.622274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.773 [2024-11-06 13:49:01.622274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.033 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.033 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:39.033 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.293 [2024-11-06 13:49:02.453668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.293 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:39.293 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.293 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.293 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:39.553 Malloc1 00:24:39.553 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.553 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:39.814 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.074 [2024-11-06 13:49:03.237454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.074 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:40.340 13:49:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.340 13:49:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.609 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:40.609 fio-3.35 00:24:40.609 Starting 1 thread 00:24:43.147 00:24:43.147 test: (groupid=0, jobs=1): err= 0: pid=748720: Wed Nov 6 13:49:06 2024 00:24:43.147 read: IOPS=9588, BW=37.5MiB/s (39.3MB/s)(75.1MiB/2006msec) 00:24:43.147 slat (usec): min=2, max=279, avg= 2.19, stdev= 2.82 00:24:43.147 clat (usec): min=3728, max=13100, avg=7378.26, stdev=560.73 00:24:43.147 lat (usec): min=3761, max=13102, avg=7380.46, stdev=560.64 00:24:43.147 clat percentiles (usec): 00:24:43.147 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6980], 00:24:43.147 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:24:43.147 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:24:43.147 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10814], 99.95th=[12387], 00:24:43.147 | 99.99th=[13042] 00:24:43.147 bw ( KiB/s): min=37648, max=38760, per=99.89%, avg=38310.00, stdev=495.63, samples=4 00:24:43.147 iops : min= 9412, max= 9690, avg=9577.50, stdev=123.91, samples=4 00:24:43.147 write: IOPS=9592, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec); 0 zone resets 00:24:43.147 slat (usec): min=2, max=272, avg= 2.26, stdev= 2.17 00:24:43.147 clat (usec): min=2882, max=11581, avg=5931.32, stdev=455.57 00:24:43.147 lat (usec): min=2900, max=11583, avg=5933.58, stdev=455.54 00:24:43.147 clat percentiles (usec): 00:24:43.147 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:24:43.147 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:43.147 | 70.00th=[ 
6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:24:43.147 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[ 9765], 00:24:43.147 | 99.99th=[10814] 00:24:43.147 bw ( KiB/s): min=38144, max=38720, per=100.00%, avg=38388.00, stdev=244.88, samples=4 00:24:43.147 iops : min= 9536, max= 9680, avg=9597.00, stdev=61.22, samples=4 00:24:43.147 lat (msec) : 4=0.06%, 10=99.85%, 20=0.09% 00:24:43.147 cpu : usr=71.62%, sys=26.93%, ctx=105, majf=0, minf=17 00:24:43.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:43.147 issued rwts: total=19234,19243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:43.147 00:24:43.147 Run status group 0 (all jobs): 00:24:43.147 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.1MiB (78.8MB), run=2006-2006msec 00:24:43.147 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.8MB), run=2006-2006msec 00:24:43.147 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.148 
13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:43.148 13:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.409 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:43.409 fio-3.35 00:24:43.409 Starting 1 thread 00:24:45.953 00:24:45.953 test: (groupid=0, jobs=1): err= 0: pid=749453: Wed Nov 6 13:49:09 2024 00:24:45.953 read: IOPS=9249, BW=145MiB/s (152MB/s)(290MiB/2007msec) 00:24:45.953 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.62 00:24:45.953 clat (usec): min=1326, max=15666, avg=8419.72, stdev=2018.98 00:24:45.953 lat (usec): min=1330, max=15669, avg=8423.34, stdev=2019.07 00:24:45.953 clat percentiles (usec): 00:24:45.953 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6587], 00:24:45.953 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 8848], 00:24:45.953 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:24:45.953 | 99.00th=[13304], 99.50th=[13829], 99.90th=[15139], 99.95th=[15533], 00:24:45.953 | 99.99th=[15664] 00:24:45.953 bw ( KiB/s): min=67200, max=82016, per=49.30%, avg=72968.00, stdev=6504.82, samples=4 00:24:45.953 iops : min= 4200, max= 5126, avg=4560.50, stdev=406.55, samples=4 00:24:45.953 write: IOPS=5506, BW=86.0MiB/s (90.2MB/s)(149MiB/1731msec); 0 zone resets 00:24:45.953 slat (usec): min=39, max=299, avg=40.88, stdev= 7.04 00:24:45.953 clat (usec): min=2391, max=15523, avg=9450.10, stdev=1566.94 00:24:45.953 lat (usec): min=2431, max=15563, avg=9490.98, stdev=1568.10 00:24:45.953 clat percentiles (usec): 00:24:45.953 | 1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8160], 00:24:45.953 
| 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:24:45.953 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11469], 95.00th=[12387], 00:24:45.953 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14484], 99.95th=[15008], 00:24:45.953 | 99.99th=[15533] 00:24:45.953 bw ( KiB/s): min=69696, max=85056, per=85.99%, avg=75760.00, stdev=6638.21, samples=4 00:24:45.953 iops : min= 4356, max= 5316, avg=4735.00, stdev=414.89, samples=4 00:24:45.953 lat (msec) : 2=0.01%, 4=0.51%, 10=72.50%, 20=26.98% 00:24:45.953 cpu : usr=83.15%, sys=15.15%, ctx=20, majf=0, minf=31 00:24:45.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:45.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.953 issued rwts: total=18564,9532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.953 00:24:45.953 Run status group 0 (all jobs): 00:24:45.953 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2007-2007msec 00:24:45.953 WRITE: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=149MiB (156MB), run=1731-1731msec 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.953 13:49:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.953 rmmod nvme_tcp 00:24:45.953 rmmod nvme_fabrics 00:24:45.953 rmmod nvme_keyring 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 748023 ']' 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 748023 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 748023 ']' 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 748023 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:45.953 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 748023 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 748023' 00:24:46.214 killing process 
with pid 748023 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 748023 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 748023 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.214 13:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.765 00:24:48.765 real 0m17.996s 00:24:48.765 user 1m9.424s 00:24:48.765 sys 0m7.752s 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.765 ************************************ 00:24:48.765 END TEST nvmf_fio_host 
00:24:48.765 ************************************ 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.765 ************************************ 00:24:48.765 START TEST nvmf_failover 00:24:48.765 ************************************ 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.765 * Looking for test storage... 00:24:48.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:48.765 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.766 --rc genhtml_branch_coverage=1 00:24:48.766 --rc genhtml_function_coverage=1 00:24:48.766 --rc genhtml_legend=1 00:24:48.766 --rc geninfo_all_blocks=1 00:24:48.766 --rc geninfo_unexecuted_blocks=1 00:24:48.766 00:24:48.766 ' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.766 --rc genhtml_branch_coverage=1 00:24:48.766 --rc genhtml_function_coverage=1 00:24:48.766 --rc genhtml_legend=1 00:24:48.766 --rc geninfo_all_blocks=1 00:24:48.766 --rc geninfo_unexecuted_blocks=1 00:24:48.766 00:24:48.766 ' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.766 --rc genhtml_branch_coverage=1 00:24:48.766 --rc genhtml_function_coverage=1 00:24:48.766 --rc genhtml_legend=1 00:24:48.766 --rc geninfo_all_blocks=1 00:24:48.766 --rc geninfo_unexecuted_blocks=1 00:24:48.766 00:24:48.766 ' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:48.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.766 --rc genhtml_branch_coverage=1 00:24:48.766 --rc genhtml_function_coverage=1 00:24:48.766 --rc genhtml_legend=1 00:24:48.766 --rc geninfo_all_blocks=1 
00:24:48.766 --rc geninfo_unexecuted_blocks=1 00:24:48.766 00:24:48.766 ' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.766 13:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.913 13:49:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:56.913 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:56.913 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:56.913 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:56.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.913 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.914 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:24:56.914 00:24:56.914 --- 10.0.0.2 ping statistics --- 00:24:56.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.914 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:24:56.914 00:24:56.914 --- 10.0.0.1 ping statistics --- 00:24:56.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.914 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=753998 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 753998 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 753998 ']' 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.914 13:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.914 [2024-11-06 13:49:19.279083] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:24:56.914 [2024-11-06 13:49:19.279153] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.914 [2024-11-06 13:49:19.379213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:56.914 [2024-11-06 13:49:19.432868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.914 [2024-11-06 13:49:19.432921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.914 [2024-11-06 13:49:19.432930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.914 [2024-11-06 13:49:19.432937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:56.914 [2024-11-06 13:49:19.432943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.914 [2024-11-06 13:49:19.434763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.914 [2024-11-06 13:49:19.434952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.914 [2024-11-06 13:49:19.435053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.914 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.914 [2024-11-06 13:49:20.283286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.175 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:57.175 Malloc0 00:24:57.175 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.436 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:57.697 13:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.697 [2024-11-06 13:49:21.041078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.957 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:57.957 [2024-11-06 13:49:21.225574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.957 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.217 [2024-11-06 13:49:21.410163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=754563 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 754563 /var/tmp/bdevperf.sock 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- 
# '[' -z 754563 ']' 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:58.217 13:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.158 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:59.158 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:59.158 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:59.158 NVMe0n1 00:24:59.418 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:59.678 00:24:59.678 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=754800 00:24:59.678 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.678 13:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
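The trace above stands up the target inside the network namespace and then exposes subsystem nqn.2016-06.io.spdk:cnode1 on three TCP ports (4420, 4421, 4422) via repeated `nvmf_subsystem_add_listener` RPCs, before attaching bdevperf as the host. A minimal dry-run sketch of that listener setup follows; it only prints the RPC invocations (a live SPDK target is required to actually run them), and the `scripts/rpc.py` path, NQN, and address are assumptions taken from this log:

```shell
#!/bin/sh
# Dry-run sketch: print the rpc.py calls that add the three TCP listeners
# used by the failover test. Values below are taken from the log (assumptions).
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"
ADDR="10.0.0.2"
for port in 4420 4421 4422; do
    # echo instead of executing: these need a running nvmf_tgt to succeed
    echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a $ADDR -s $port"
done
```

Piping the output through `sh` (with a live target and its RPC socket) would perform the same setup the test script does at host/failover.sh lines 26–28.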
00:25:00.619 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.619 [2024-11-06 13:49:23.988039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23be4e0 is same with the state(6) to be set
[identical tcp.c:1773 lines for tqpair=0x23be4e0 repeated through 13:49:23.988602; duplicates omitted]
00:25:00.881 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:04.183 13:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:04.183 00:25:04.183 13:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.183 [2024-11-06 13:49:27.503048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf030 is same with the state(6) to be set
[identical tcp.c:1773 lines for tqpair=0x23bf030 repeated through 13:49:27.503108; duplicates omitted]
00:25:04.183 13:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:07.483 13:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.483 [2024-11-06 13:49:30.696337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.483 13:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:08.613 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:08.613 [2024-11-06 13:49:31.886455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22844e0 is same with the state(6) to be set
[identical tcp.c:1773 lines for tqpair=0x22844e0 repeated; trace truncated at 13:49:31.886739; duplicates omitted]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22844e0 is same with the state(6) to be set 00:25:08.614 [2024-11-06 13:49:31.886743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22844e0 is same with the state(6) to be set 00:25:08.614 [2024-11-06 13:49:31.886754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22844e0 is same with the state(6) to be set 00:25:08.614 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 754800 00:25:15.215 { 00:25:15.215 "results": [ 00:25:15.215 { 00:25:15.215 "job": "NVMe0n1", 00:25:15.215 "core_mask": "0x1", 00:25:15.215 "workload": "verify", 00:25:15.215 "status": "finished", 00:25:15.215 "verify_range": { 00:25:15.215 "start": 0, 00:25:15.215 "length": 16384 00:25:15.215 }, 00:25:15.215 "queue_depth": 128, 00:25:15.215 "io_size": 4096, 00:25:15.215 "runtime": 15.004713, 00:25:15.215 "iops": 11292.985077422007, 00:25:15.215 "mibps": 44.113222958679714, 00:25:15.215 "io_failed": 9157, 00:25:15.215 "io_timeout": 0, 00:25:15.215 "avg_latency_us": 10724.826470628856, 00:25:15.215 "min_latency_us": 781.6533333333333, 00:25:15.215 "max_latency_us": 23483.733333333334 00:25:15.215 } 00:25:15.215 ], 00:25:15.215 "core_count": 1 00:25:15.215 } 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 754563 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 754563 ']' 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 754563 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:15.215 13:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 754563 00:25:15.215 13:49:38 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:15.215 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:15.215 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 754563' 00:25:15.215 killing process with pid 754563 00:25:15.215 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 754563 00:25:15.215 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 754563 00:25:15.215 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.215 [2024-11-06 13:49:21.492613] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:25:15.215 [2024-11-06 13:49:21.492673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754563 ] 00:25:15.215 [2024-11-06 13:49:21.563310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.215 [2024-11-06 13:49:21.599659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.215 Running I/O for 15 seconds... 
00:25:15.215 11523.00 IOPS, 45.01 MiB/s [2024-11-06T12:49:38.591Z] [2024-11-06 13:49:23.990188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.215 [2024-11-06 13:49:23.990222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.215 [2024-11-06 13:49:23.990237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.215 [2024-11-06 13:49:23.990246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.216 [2024-11-06 13:49:23.990316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.216 [2024-11-06 13:49:23.990610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.216 [2024-11-06 13:49:23.990886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.216 [2024-11-06 13:49:23.990893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.217 [2024-11-06 13:49:23.990911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.990928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.990945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.990962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.990979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.990988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.990996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.217 [2024-11-06 13:49:23.991202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.217 [2024-11-06 13:49:23.991493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.217 [2024-11-06 13:49:23.991510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.217 [2024-11-06 13:49:23.991693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.217 [2024-11-06 13:49:23.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.218 [2024-11-06 13:49:23.991789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.991984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.991992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.218 [2024-11-06 13:49:23.992077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.218 [2024-11-06 13:49:23.992145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100840 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100856 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100864 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992280] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100872 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100880 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100888 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100896 len:8 PRP1 0x0 PRP2 0x0 
00:25:15.218 [2024-11-06 13:49:23.992375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100904 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100912 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100920 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992464] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100928 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100936 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:23.992517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:23.992522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:23.992528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100944 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:23.992535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:24.005558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:24.005586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:24.005597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100952 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:24.005607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.218 [2024-11-06 13:49:24.005615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.218 [2024-11-06 13:49:24.005626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.218 [2024-11-06 13:49:24.005633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100960 len:8 PRP1 0x0 PRP2 0x0 00:25:15.218 [2024-11-06 13:49:24.005640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:24.005688] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:15.219 [2024-11-06 13:49:24.005723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.219 [2024-11-06 13:49:24.005737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:24.005774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.219 [2024-11-06 13:49:24.005785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:24.005796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.219 [2024-11-06 13:49:24.005806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:24.005817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.219 [2024-11-06 13:49:24.005828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:24.005835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:15.219 [2024-11-06 13:49:24.005882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4d70 (9): Bad file descriptor 00:25:15.219 [2024-11-06 13:49:24.009407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:15.219 [2024-11-06 13:49:24.031588] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:25:15.219 11344.00 IOPS, 44.31 MiB/s [2024-11-06T12:49:38.595Z] 11390.33 IOPS, 44.49 MiB/s [2024-11-06T12:49:38.595Z] 11361.25 IOPS, 44.38 MiB/s [2024-11-06T12:49:38.595Z] [2024-11-06 13:49:27.506701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.219 [2024-11-06 13:49:27.506894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.219 [2024-11-06 13:49:27.506915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.219 [2024-11-06 13:49:27.506932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219 [2024-11-06 13:49:27.506942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.219 [2024-11-06 13:49:27.506949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219
[2024-11-06 13:49:27.506959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.219
[2024-11-06 13:49:27.506966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.219
[... identical command/completion pairs repeat for WRITE lba:33528 through lba:34144 (len:8, varying cid), each completed as ABORTED - SQ DELETION (00/08) ...]
[2024-11-06 13:49:27.508331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.221
[2024-11-06 13:49:27.508339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34152 len:8 PRP1 0x0 PRP2 0x0 00:25:15.221
[2024-11-06 13:49:27.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.221
[2024-11-06 13:49:27.508356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.221
[... identical manual-completion records repeat for WRITE lba:34160 through lba:34336 (cid:0, PRP1 0x0 PRP2 0x0), each aborted as SQ DELETION ...]
[2024-11-06 13:49:27.509036]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:25:15.221 [2024-11-06 13:49:27.509043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.221 [2024-11-06 13:49:27.509051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.221 [2024-11-06 13:49:27.509056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.221 [2024-11-06 13:49:27.509064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:25:15.221 [2024-11-06 13:49:27.509072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.221 [2024-11-06 13:49:27.509080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.221 [2024-11-06 13:49:27.509086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34368 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34384 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509223] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34408 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.509289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34416 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.509296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.509304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.509309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.519993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 
[2024-11-06 13:49:27.520023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.520044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.520052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.520059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.520075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.520081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.520088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.222 [2024-11-06 13:49:27.520102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.222 [2024-11-06 13:49:27.520109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:25:15.222 [2024-11-06 13:49:27.520117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520157] bdev_nvme.c:2052:bdev_nvme_failover_trid: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:15.222 [2024-11-06 13:49:27.520186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:27.520196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:27.520213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:27.520229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:27.520246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:27.520259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:25:15.222 [2024-11-06 13:49:27.520289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4d70 (9): Bad file descriptor 00:25:15.222 [2024-11-06 13:49:27.523793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:15.222 [2024-11-06 13:49:27.708090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:15.222 10941.60 IOPS, 42.74 MiB/s [2024-11-06T12:49:38.598Z] 11006.67 IOPS, 42.99 MiB/s [2024-11-06T12:49:38.598Z] 11109.14 IOPS, 43.40 MiB/s [2024-11-06T12:49:38.598Z] 11192.88 IOPS, 43.72 MiB/s [2024-11-06T12:49:38.598Z] [2024-11-06 13:49:31.886132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:31.886179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.886190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:31.886198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.886207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:31.886215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.886223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.222 [2024-11-06 13:49:31.886230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.886237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4d70 is same with the state(6) to be set 00:25:15.222 [2024-11-06 13:49:31.888186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:15.222 [2024-11-06 13:49:31.888292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.222 [2024-11-06 13:49:31.888508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.222 [2024-11-06 13:49:31.888518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 
[2024-11-06 13:49:31.888587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 
13:49:31.888881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.888905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888974] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.888983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.888991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.223 [2024-11-06 13:49:31.889141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.223 [2024-11-06 13:49:31.889254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.223 [2024-11-06 13:49:31.889260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 
[2024-11-06 13:49:31.889362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.224 [2024-11-06 13:49:31.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.224 [2024-11-06 13:49:31.889860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 
[2024-11-06 13:49:31.889947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.889990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.889998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.224 [2024-11-06 13:49:31.890115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.224 [2024-11-06 13:49:31.890124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.225 [2024-11-06 13:49:31.890364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.225 [2024-11-06 13:49:31.890389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.225 [2024-11-06 13:49:31.890395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83736 len:8 PRP1 0x0 PRP2 0x0 00:25:15.225 [2024-11-06 13:49:31.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.225 [2024-11-06 13:49:31.890443] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:15.225 [2024-11-06 13:49:31.890453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:15.225 [2024-11-06 13:49:31.893963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:15.225 [2024-11-06 13:49:31.893987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4d70 (9): Bad file descriptor 00:25:15.225 [2024-11-06 13:49:31.916349] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:15.225 11177.89 IOPS, 43.66 MiB/s [2024-11-06T12:49:38.601Z] 11231.00 IOPS, 43.87 MiB/s [2024-11-06T12:49:38.601Z] 11290.55 IOPS, 44.10 MiB/s [2024-11-06T12:49:38.601Z] 11286.42 IOPS, 44.09 MiB/s [2024-11-06T12:49:38.601Z] 11284.23 IOPS, 44.08 MiB/s [2024-11-06T12:49:38.601Z] 11285.00 IOPS, 44.08 MiB/s 00:25:15.225 Latency(us) 00:25:15.225 [2024-11-06T12:49:38.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.225 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:15.225 Verification LBA range: start 0x0 length 0x4000 00:25:15.225 NVMe0n1 : 15.00 11292.99 44.11 610.27 0.00 10724.83 781.65 23483.73 00:25:15.225 [2024-11-06T12:49:38.601Z] =================================================================================================================== 00:25:15.225 [2024-11-06T12:49:38.601Z] Total : 11292.99 44.11 610.27 0.00 10724.83 781.65 23483.73 00:25:15.225 Received shutdown signal, test time was about 15.000000 seconds 00:25:15.225 00:25:15.225 Latency(us) 00:25:15.225 [2024-11-06T12:49:38.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.225 [2024-11-06T12:49:38.601Z] =================================================================================================================== 00:25:15.225 [2024-11-06T12:49:38.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=757704 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 757704 /var/tmp/bdevperf.sock 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 757704 ']' 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.225 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.795 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.795 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:15.795 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.795 [2024-11-06 13:49:39.169528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.055 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.055 [2024-11-06 13:49:39.345974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.055 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.314 NVMe0n1 00:25:16.314 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.575 00:25:16.575 13:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.147 00:25:17.147 13:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.147 13:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:17.147 13:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.408 13:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:20.709 13:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.709 13:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:20.709 13:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=758938 00:25:20.709 13:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.709 13:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 758938 00:25:21.655 { 00:25:21.655 "results": [ 00:25:21.655 { 00:25:21.655 "job": "NVMe0n1", 00:25:21.655 "core_mask": "0x1", 00:25:21.655 "workload": "verify", 00:25:21.655 "status": "finished", 00:25:21.655 "verify_range": { 00:25:21.655 "start": 0, 00:25:21.655 "length": 16384 00:25:21.655 }, 00:25:21.655 "queue_depth": 128, 00:25:21.655 "io_size": 4096, 00:25:21.655 "runtime": 1.012133, 00:25:21.655 "iops": 11204.061126353947, 00:25:21.655 "mibps": 43.765863774820104, 00:25:21.655 "io_failed": 0, 00:25:21.655 "io_timeout": 0, 00:25:21.655 "avg_latency_us": 
11372.300792475015, 00:25:21.655 "min_latency_us": 2757.9733333333334, 00:25:21.655 "max_latency_us": 9939.626666666667 00:25:21.655 } 00:25:21.655 ], 00:25:21.655 "core_count": 1 00:25:21.655 } 00:25:21.655 13:49:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:21.655 [2024-11-06 13:49:38.218476] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:25:21.655 [2024-11-06 13:49:38.218538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757704 ] 00:25:21.655 [2024-11-06 13:49:38.289689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.655 [2024-11-06 13:49:38.325101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.655 [2024-11-06 13:49:40.647957] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:21.655 [2024-11-06 13:49:40.648009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.655 [2024-11-06 13:49:40.648021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.655 [2024-11-06 13:49:40.648031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.655 [2024-11-06 13:49:40.648039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.655 [2024-11-06 13:49:40.648048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:21.655 [2024-11-06 13:49:40.648055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.655 [2024-11-06 13:49:40.648063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.655 [2024-11-06 13:49:40.648071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.655 [2024-11-06 13:49:40.648078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:21.655 [2024-11-06 13:49:40.648104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:21.655 [2024-11-06 13:49:40.648119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa6d70 (9): Bad file descriptor 00:25:21.655 [2024-11-06 13:49:40.662032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:21.655 Running I/O for 1 seconds... 
00:25:21.655 11212.00 IOPS, 43.80 MiB/s 00:25:21.655 Latency(us) 00:25:21.655 [2024-11-06T12:49:45.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.655 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:21.655 Verification LBA range: start 0x0 length 0x4000 00:25:21.655 NVMe0n1 : 1.01 11204.06 43.77 0.00 0.00 11372.30 2757.97 9939.63 00:25:21.655 [2024-11-06T12:49:45.031Z] =================================================================================================================== 00:25:21.655 [2024-11-06T12:49:45.031Z] Total : 11204.06 43.77 0.00 0.00 11372.30 2757.97 9939.63 00:25:21.655 13:49:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.655 13:49:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:21.916 13:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.178 13:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.178 13:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:22.439 13:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.439 13:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 757704 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 757704 ']' 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 757704 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 757704 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 757704' 00:25:25.738 killing process with pid 757704 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 757704 00:25:25.738 13:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 757704 00:25:25.738 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:25.738 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.999 rmmod nvme_tcp 00:25:25.999 rmmod nvme_fabrics 00:25:25.999 rmmod nvme_keyring 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 753998 ']' 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 753998 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 753998 ']' 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 753998 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:25.999 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 753998 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 753998' 00:25:26.259 killing process with pid 753998 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 753998 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 753998 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.259 13:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.807 00:25:28.807 real 0m39.970s 00:25:28.807 user 2m3.353s 00:25:28.807 sys 
0m8.418s 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.807 ************************************ 00:25:28.807 END TEST nvmf_failover 00:25:28.807 ************************************ 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.807 ************************************ 00:25:28.807 START TEST nvmf_host_discovery 00:25:28.807 ************************************ 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.807 * Looking for test storage... 
00:25:28.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.807 --rc genhtml_branch_coverage=1 00:25:28.807 --rc genhtml_function_coverage=1 00:25:28.807 --rc 
genhtml_legend=1 00:25:28.807 --rc geninfo_all_blocks=1 00:25:28.807 --rc geninfo_unexecuted_blocks=1 00:25:28.807 00:25:28.807 ' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.807 --rc genhtml_branch_coverage=1 00:25:28.807 --rc genhtml_function_coverage=1 00:25:28.807 --rc genhtml_legend=1 00:25:28.807 --rc geninfo_all_blocks=1 00:25:28.807 --rc geninfo_unexecuted_blocks=1 00:25:28.807 00:25:28.807 ' 00:25:28.807 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.807 --rc genhtml_branch_coverage=1 00:25:28.807 --rc genhtml_function_coverage=1 00:25:28.807 --rc genhtml_legend=1 00:25:28.808 --rc geninfo_all_blocks=1 00:25:28.808 --rc geninfo_unexecuted_blocks=1 00:25:28.808 00:25:28.808 ' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:28.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.808 --rc genhtml_branch_coverage=1 00:25:28.808 --rc genhtml_function_coverage=1 00:25:28.808 --rc genhtml_legend=1 00:25:28.808 --rc geninfo_all_blocks=1 00:25:28.808 --rc geninfo_unexecuted_blocks=1 00:25:28.808 00:25:28.808 ' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.808 13:49:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.808 13:49:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.808 13:49:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.808 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:36.951 
13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.951 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.952 13:49:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:36.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:36.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:36.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:36.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:36.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:25:36.952 00:25:36.952 --- 10.0.0.2 ping statistics --- 00:25:36.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.952 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:25:36.952 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:25:36.952 00:25:36.952 --- 10.0.0.1 ping statistics --- 00:25:36.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.952 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.953 
13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=764052 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 764052 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 764052 ']' 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.953 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.953 [2024-11-06 13:49:59.443817] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:25:36.953 [2024-11-06 13:49:59.443886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.953 [2024-11-06 13:49:59.548548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.953 [2024-11-06 13:49:59.599106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.953 [2024-11-06 13:49:59.599159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.953 [2024-11-06 13:49:59.599168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.953 [2024-11-06 13:49:59.599175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.953 [2024-11-06 13:49:59.599181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:36.953 [2024-11-06 13:49:59.599958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.953 [2024-11-06 13:50:00.310250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.953 [2024-11-06 13:50:00.318476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:36.953 13:50:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.953 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.213 null0 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.213 null1 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:37.213 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=764299 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 764299 /tmp/host.sock 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 764299 ']' 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:37.214 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:37.214 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.214 [2024-11-06 13:50:00.406719] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:25:37.214 [2024-11-06 13:50:00.406797] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764299 ] 00:25:37.214 [2024-11-06 13:50:00.483969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.214 [2024-11-06 13:50:00.525868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:38.154 13:50:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:38.154 13:50:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:38.154 
13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.154 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.155 [2024-11-06 13:50:01.517485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.155 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:38.415 
13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:38.415 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:38.985 [2024-11-06 13:50:02.218026] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:38.985 [2024-11-06 13:50:02.218046] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:38.985 [2024-11-06 13:50:02.218059] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:38.985 [2024-11-06 13:50:02.304321] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.244 [2024-11-06 13:50:02.479461] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:25:39.244 [2024-11-06 13:50:02.480448] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11c6780:1 started. 00:25:39.244 [2024-11-06 13:50:02.482066] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.244 [2024-11-06 13:50:02.482084] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.244 [2024-11-06 13:50:02.487970] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11c6780 was disconnected and freed. delete nvme_qpair. 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.504 
13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.504 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:39.505 
13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.505 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:39.766 
[2024-11-06 13:50:02.923752] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11c6b20:1 started. 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.766 [2024-11-06 13:50:02.928723] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11c6b20 was disconnected and freed. delete nvme_qpair. 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:25:39.766 [2024-11-06 13:50:02.997406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.766 [2024-11-06 13:50:02.997833] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:39.766 [2024-11-06 13:50:02.997855] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.766 [2024-11-06 13:50:03.086124] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new 
path for nvme0 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.766 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.766 13:50:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.025 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:40.025 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:40.025 [2024-11-06 13:50:03.389716] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:40.025 [2024-11-06 13:50:03.389760] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.025 [2024-11-06 13:50:03.389770] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:40.025 [2024-11-06 13:50:03.389775] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.965 [2024-11-06 13:50:04.269149] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:40.965 [2024-11-06 13:50:04.269170] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.965 [2024-11-06 13:50:04.273865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.965 [2024-11-06 13:50:04.273889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.965 [2024-11-06 13:50:04.273899] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.965 [2024-11-06 13:50:04.273906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.965 [2024-11-06 13:50:04.273914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.965 [2024-11-06 13:50:04.273922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.965 [2024-11-06 13:50:04.273930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.965 [2024-11-06 13:50:04.273937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.965 [2024-11-06 13:50:04.273945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:40.965 13:50:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.965 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.965 [2024-11-06 13:50:04.283879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.965 [2024-11-06 13:50:04.293917] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:40.966 [2024-11-06 13:50:04.293930] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:40.966 [2024-11-06 13:50:04.293935] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.293941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.966 [2024-11-06 13:50:04.293959] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:40.966 [2024-11-06 13:50:04.294279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.966 [2024-11-06 13:50:04.294294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:40.966 [2024-11-06 13:50:04.294302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.966 [2024-11-06 13:50:04.294314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.966 [2024-11-06 13:50:04.294332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.966 [2024-11-06 13:50:04.294346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.966 [2024-11-06 13:50:04.294354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.966 [2024-11-06 13:50:04.294361] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:40.966 [2024-11-06 13:50:04.294366] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:40.966 [2024-11-06 13:50:04.294371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.966 [2024-11-06 13:50:04.303990] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:40.966 [2024-11-06 13:50:04.304001] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:40.966 [2024-11-06 13:50:04.304006] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.304011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.966 [2024-11-06 13:50:04.304025] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.304326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.966 [2024-11-06 13:50:04.304340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:40.966 [2024-11-06 13:50:04.304348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.966 [2024-11-06 13:50:04.304360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.966 [2024-11-06 13:50:04.304378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.966 [2024-11-06 13:50:04.304386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.966 [2024-11-06 13:50:04.304394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.966 [2024-11-06 13:50:04.304400] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:40.966 [2024-11-06 13:50:04.304404] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:40.966 [2024-11-06 13:50:04.304409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:40.966 [2024-11-06 13:50:04.314058] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:40.966 [2024-11-06 13:50:04.314072] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:40.966 [2024-11-06 13:50:04.314077] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.314081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.966 [2024-11-06 13:50:04.314096] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.314436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.966 [2024-11-06 13:50:04.314450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:40.966 [2024-11-06 13:50:04.314457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.966 [2024-11-06 13:50:04.314473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.966 [2024-11-06 13:50:04.314516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.966 [2024-11-06 13:50:04.314525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.966 [2024-11-06 13:50:04.314533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.966 [2024-11-06 13:50:04.314540] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:40.966 [2024-11-06 13:50:04.314545] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:40.966 [2024-11-06 13:50:04.314550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:40.966 [2024-11-06 13:50:04.324127] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:40.966 [2024-11-06 13:50:04.324140] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:40.966 [2024-11-06 13:50:04.324145] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.324150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.966 [2024-11-06 13:50:04.324165] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:40.966 [2024-11-06 13:50:04.324471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.966 [2024-11-06 13:50:04.324484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:40.966 [2024-11-06 13:50:04.324492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.966 [2024-11-06 13:50:04.324504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.966 [2024-11-06 13:50:04.324516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.966 [2024-11-06 13:50:04.324523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.966 [2024-11-06 13:50:04.324530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.966 [2024-11-06 13:50:04.324537] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:40.966 [2024-11-06 13:50:04.324541] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:40.966 [2024-11-06 13:50:04.324546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.966 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.966 [2024-11-06 13:50:04.334197] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:25:40.966 [2024-11-06 13:50:04.334209] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:40.966 [2024-11-06 13:50:04.334214] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.334219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.966 [2024-11-06 13:50:04.334233] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:40.966 [2024-11-06 13:50:04.334573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.966 [2024-11-06 13:50:04.334588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:40.966 [2024-11-06 13:50:04.334596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:40.966 [2024-11-06 13:50:04.334610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:40.966 [2024-11-06 13:50:04.334628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.966 [2024-11-06 13:50:04.334635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.966 [2024-11-06 13:50:04.334643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.966 [2024-11-06 13:50:04.334650] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:40.966 [2024-11-06 13:50:04.334656] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:40.966 [2024-11-06 13:50:04.334660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.227 [2024-11-06 13:50:04.344265] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.227 [2024-11-06 13:50:04.344279] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.228 [2024-11-06 13:50:04.344284] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.228 [2024-11-06 13:50:04.344289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.228 [2024-11-06 13:50:04.344304] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.228 [2024-11-06 13:50:04.344673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.228 [2024-11-06 13:50:04.344686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:41.228 [2024-11-06 13:50:04.344695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:41.228 [2024-11-06 13:50:04.344707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:41.228 [2024-11-06 13:50:04.344725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.228 [2024-11-06 13:50:04.344736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.228 [2024-11-06 13:50:04.344744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:41.228 [2024-11-06 13:50:04.344755] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.228 [2024-11-06 13:50:04.344760] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.228 [2024-11-06 13:50:04.344764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.228 [2024-11-06 13:50:04.354335] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.228 [2024-11-06 13:50:04.354347] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.228 [2024-11-06 13:50:04.354351] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.228 [2024-11-06 13:50:04.354356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.228 [2024-11-06 13:50:04.354370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.228 [2024-11-06 13:50:04.354711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.228 [2024-11-06 13:50:04.354724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1196e10 with addr=10.0.0.2, port=4420 00:25:41.228 [2024-11-06 13:50:04.354732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196e10 is same with the state(6) to be set 00:25:41.228 [2024-11-06 13:50:04.354743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1196e10 (9): Bad file descriptor 00:25:41.228 [2024-11-06 13:50:04.354765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.228 [2024-11-06 13:50:04.354772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.228 [2024-11-06 13:50:04.354779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.228 [2024-11-06 13:50:04.354785] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.228 [2024-11-06 13:50:04.354790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.228 [2024-11-06 13:50:04.354794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.228 [2024-11-06 13:50:04.357774] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:41.228 [2024-11-06 13:50:04.357792] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.228 
13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:41.228 13:50:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.228 13:50:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.228 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:41.489 13:50:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.489 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.427 [2024-11-06 13:50:05.707190] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.427 [2024-11-06 13:50:05.707208] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.427 [2024-11-06 13:50:05.707220] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.688 [2024-11-06 13:50:05.835631] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:42.688 [2024-11-06 13:50:05.899341] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:42.688 [2024-11-06 13:50:05.900082] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1193ab0:1 started. 00:25:42.688 [2024-11-06 13:50:05.901898] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.688 [2024-11-06 13:50:05.901925] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.688 [2024-11-06 13:50:05.906596] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1193ab0 was disconnected and freed. delete nvme_qpair. 
00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.688 request: 00:25:42.688 { 00:25:42.688 "name": "nvme", 00:25:42.688 "trtype": "tcp", 00:25:42.688 "traddr": "10.0.0.2", 00:25:42.688 "adrfam": "ipv4", 00:25:42.688 "trsvcid": "8009", 00:25:42.688 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:42.688 "wait_for_attach": true, 00:25:42.688 "method": "bdev_nvme_start_discovery", 00:25:42.688 "req_id": 1 00:25:42.688 } 00:25:42.688 Got JSON-RPC error response 00:25:42.688 response: 00:25:42.688 { 00:25:42.688 "code": -17, 00:25:42.688 "message": "File exists" 00:25:42.688 } 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.688 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.688 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.688 request: 00:25:42.688 { 00:25:42.688 "name": "nvme_second", 00:25:42.688 "trtype": "tcp", 00:25:42.688 "traddr": "10.0.0.2", 00:25:42.688 "adrfam": "ipv4", 00:25:42.689 "trsvcid": "8009", 00:25:42.689 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:42.689 "wait_for_attach": true, 00:25:42.689 "method": "bdev_nvme_start_discovery", 00:25:42.689 "req_id": 1 00:25:42.689 } 00:25:42.689 Got JSON-RPC error response 00:25:42.689 response: 00:25:42.689 { 00:25:42.689 "code": -17, 00:25:42.689 "message": "File exists" 00:25:42.689 } 
00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.689 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq 
-r '.[].name' 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:42.949 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.887 [2024-11-06 13:50:07.158183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.887 [2024-11-06 13:50:07.158213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11adb00 with addr=10.0.0.2, port=8010 00:25:43.887 [2024-11-06 13:50:07.158226] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:43.887 [2024-11-06 13:50:07.158233] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:43.887 [2024-11-06 13:50:07.158239] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:44.826 [2024-11-06 13:50:08.160616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.826 [2024-11-06 13:50:08.160641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11adb00 with addr=10.0.0.2, port=8010 00:25:44.826 [2024-11-06 13:50:08.160652] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:44.826 [2024-11-06 13:50:08.160659] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:44.826 [2024-11-06 13:50:08.160665] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:46.207 [2024-11-06 13:50:09.162629] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:46.207 request: 00:25:46.207 { 00:25:46.207 "name": "nvme_second", 00:25:46.207 "trtype": "tcp", 00:25:46.207 "traddr": "10.0.0.2", 00:25:46.207 "adrfam": "ipv4", 00:25:46.207 "trsvcid": "8010", 00:25:46.207 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:46.207 "wait_for_attach": false, 00:25:46.207 "attach_timeout_ms": 3000, 00:25:46.207 "method": "bdev_nvme_start_discovery", 00:25:46.207 "req_id": 1 
00:25:46.207 } 00:25:46.207 Got JSON-RPC error response 00:25:46.207 response: 00:25:46.207 { 00:25:46.207 "code": -110, 00:25:46.207 "message": "Connection timed out" 00:25:46.207 } 00:25:46.207 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:46.207 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 764299 00:25:46.208 13:50:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.208 rmmod nvme_tcp 00:25:46.208 rmmod nvme_fabrics 00:25:46.208 rmmod nvme_keyring 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 764052 ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 764052 ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 764052' 00:25:46.208 killing process with pid 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 764052 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.208 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:25:48.747 00:25:48.747 real 0m19.828s 00:25:48.747 user 0m22.652s 00:25:48.747 sys 0m7.140s 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.747 ************************************ 00:25:48.747 END TEST nvmf_host_discovery 00:25:48.747 ************************************ 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.747 ************************************ 00:25:48.747 START TEST nvmf_host_multipath_status 00:25:48.747 ************************************ 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:48.747 * Looking for test storage... 
00:25:48.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:48.747 13:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.747 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.748 13:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.748 --rc genhtml_branch_coverage=1 00:25:48.748 --rc genhtml_function_coverage=1 00:25:48.748 --rc genhtml_legend=1 00:25:48.748 --rc geninfo_all_blocks=1 00:25:48.748 --rc geninfo_unexecuted_blocks=1 00:25:48.748 00:25:48.748 ' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.748 --rc genhtml_branch_coverage=1 00:25:48.748 --rc genhtml_function_coverage=1 00:25:48.748 --rc genhtml_legend=1 00:25:48.748 --rc geninfo_all_blocks=1 00:25:48.748 --rc geninfo_unexecuted_blocks=1 00:25:48.748 00:25:48.748 ' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.748 --rc genhtml_branch_coverage=1 00:25:48.748 --rc genhtml_function_coverage=1 00:25:48.748 --rc genhtml_legend=1 00:25:48.748 --rc geninfo_all_blocks=1 00:25:48.748 --rc geninfo_unexecuted_blocks=1 00:25:48.748 00:25:48.748 ' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.748 --rc genhtml_branch_coverage=1 00:25:48.748 --rc genhtml_function_coverage=1 00:25:48.748 --rc genhtml_legend=1 00:25:48.748 --rc geninfo_all_blocks=1 00:25:48.748 --rc geninfo_unexecuted_blocks=1 00:25:48.748 00:25:48.748 ' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:48.748 
13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.748 13:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.748 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.749 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.888 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.888 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.889 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:56.889 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:56.889 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:56.889 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.889 13:50:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:56.889 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.889 13:50:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:56.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:25:56.889 00:25:56.889 --- 10.0.0.2 ping statistics --- 00:25:56.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.889 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:25:56.889 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:25:56.889 00:25:56.889 --- 10.0.0.1 ping statistics --- 00:25:56.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.890 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=770395 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 770395 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 770395 ']' 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:56.890 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.890 [2024-11-06 13:50:19.426164] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:25:56.890 [2024-11-06 13:50:19.426231] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.890 [2024-11-06 13:50:19.508613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:56.890 [2024-11-06 13:50:19.549366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.890 [2024-11-06 13:50:19.549403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:56.890 [2024-11-06 13:50:19.549411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.890 [2024-11-06 13:50:19.549418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.890 [2024-11-06 13:50:19.549425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.890 [2024-11-06 13:50:19.550782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.890 [2024-11-06 13:50:19.550787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.890 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:56.890 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:56.890 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.890 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:56.890 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.150 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.150 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=770395 00:25:57.150 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:57.150 [2024-11-06 13:50:20.429453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.150 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:57.409 Malloc0 00:25:57.409 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:57.669 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.669 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.929 [2024-11-06 13:50:21.116044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.929 [2024-11-06 13:50:21.284441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=770841 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 770841 /var/tmp/bdevperf.sock 00:25:57.929 13:50:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 770841 ']' 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.929 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.188 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.188 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:58.189 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:58.448 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:59.016 Nvme0n1 00:25:59.016 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:59.276 Nvme0n1 00:25:59.276 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:59.276 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:01.185 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:01.185 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:01.448 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.707 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:02.646 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:02.646 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.646 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.646 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.906 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.906 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.165 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.424 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.424 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.424 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.424 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:03.684 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.944 13:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.204 13:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:05.143 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:05.143 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.143 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.143 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.403 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.662 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.662 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.662 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.662 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.922 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.922 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.922 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.922 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:06.182 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.443 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:06.703 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:07.641 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:07.641 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.641 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.641 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.901 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.161 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.161 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.161 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.161 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.422 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.422 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.423 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.707 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.707 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:08.707 13:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.967 13:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.967 13:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:10.349 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.350 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.609 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.609 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.609 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.609 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.869 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.129 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.129 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:11.129 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:11.388 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.388 13:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.770 13:50:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.770 13:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.770 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.770 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.770 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.770 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.029 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.029 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.029 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.029 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.289 
13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.289 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:13.289 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.289 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.289 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.289 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.290 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.290 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.549 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.549 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:13.549 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:13.808 13:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.808 13:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.188 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.448 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.448 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.448 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.448 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.707 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.707 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:15.707 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.707 13:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.707 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.707 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.707 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.707 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.966 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.966 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:16.226 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:16.226 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:16.226 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.494 13:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:17.437 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:17.437 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.437 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:17.437 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.697 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.697 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.697 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.697 13:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.957 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.957 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.957 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.957 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.217 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.217 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.217 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:18.218 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.218 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.218 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.218 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.218 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.479 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.479 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.479 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.479 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.740 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.740 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:18.740 13:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.001 13:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.001 13:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:19.993 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:19.993 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:19.993 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.993 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.254 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.254 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.254 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.254 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.514 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.515 13:50:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.515 13:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.775 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.775 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.775 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.775 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.035 
13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:21.035 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.296 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:21.556 13:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:22.498 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:22.499 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.499 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.499 13:50:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.758 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.758 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.758 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.758 13:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:23.017 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.277 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.277 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.277 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.277 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:23.538 13:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.799 13:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:24.060 13:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:25.001 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:25.001 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.001 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.001 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.262 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.262 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.262 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.262 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.522 
13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.522 13:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.783 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.783 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.783 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.783 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 770841 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 770841 ']' 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 770841 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:26.086 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 770841 00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 770841' 00:26:26.388 killing process with pid 770841 00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 770841 
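The port_status checks above each call bdev_nvme_get_io_paths over the bdevperf RPC socket and filter the reply with jq (e.g. `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible`). A minimal Python sketch of that same selection logic, run against a hypothetical reply payload whose field names are taken from the jq filters in the log (the values mirror the expectations checked after `set_ANA_state non_optimized inaccessible`):

```python
# Hypothetical io_paths reply; shape inferred from the jq filters in the log,
# not a captured RPC response.
io_paths_reply = {
    "poll_groups": [
        {
            "io_paths": [
                {"transport": {"trsvcid": "4420"},
                 "current": True, "connected": True, "accessible": True},
                {"transport": {"trsvcid": "4421"},
                 "current": False, "connected": True, "accessible": False},
            ]
        }
    ]
}

def port_status(reply, port, field):
    """Return one field of the io_path whose listener port matches,
    emulating the jq select() used by host/multipath_status.sh."""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == str(port):
                return path[field]
    return None

# After set_ANA_state non_optimized inaccessible, the log expects
# check_status true false true true true false:
print(port_status(io_paths_reply, 4420, "current"))     # expected True
print(port_status(io_paths_reply, 4421, "accessible"))  # expected False
```

This only reproduces the filtering step; in the test itself the reply comes from `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` and the comparison is done in bash.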
00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 770841 00:26:26.388 { 00:26:26.388 "results": [ 00:26:26.388 { 00:26:26.388 "job": "Nvme0n1", 00:26:26.388 "core_mask": "0x4", 00:26:26.388 "workload": "verify", 00:26:26.388 "status": "terminated", 00:26:26.388 "verify_range": { 00:26:26.388 "start": 0, 00:26:26.388 "length": 16384 00:26:26.388 }, 00:26:26.388 "queue_depth": 128, 00:26:26.388 "io_size": 4096, 00:26:26.388 "runtime": 26.771751, 00:26:26.388 "iops": 10878.743045234509, 00:26:26.388 "mibps": 42.4950900204473, 00:26:26.388 "io_failed": 0, 00:26:26.388 "io_timeout": 0, 00:26:26.388 "avg_latency_us": 11748.7095126292, 00:26:26.388 "min_latency_us": 298.6666666666667, 00:26:26.388 "max_latency_us": 3019898.88 00:26:26.388 } 00:26:26.388 ], 00:26:26.388 "core_count": 1 00:26:26.388 } 00:26:26.388 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 770841 00:26:26.389 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.389 [2024-11-06 13:50:21.348665] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:26:26.389 [2024-11-06 13:50:21.348726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770841 ] 00:26:26.389 [2024-11-06 13:50:21.407058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.389 [2024-11-06 13:50:21.436480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.389 Running I/O for 90 seconds... 
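The terminated job summary above reports both "iops" and "mibps" for the verify workload. A quick arithmetic check, using only the figures printed in the JSON block, confirms the two are consistent with the logged io_size of 4096 bytes:

```python
# Figures copied from the "terminated" job summary in the log.
iops = 10878.743045234509   # "iops"
io_size = 4096              # "io_size", bytes per I/O
mib = 1024 * 1024           # bytes per MiB

# Throughput in MiB/s follows directly from IOPS x I/O size.
mibps = iops * io_size / mib
print(round(mibps, 4))      # ~42.4951, matching the reported "mibps" field
```

The same relation holds for any fixed-block bdevperf run, so it is a cheap sanity check when reading these summaries.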
00:26:26.389 9675.00 IOPS, 37.79 MiB/s [2024-11-06T12:50:49.765Z] 9617.50 IOPS, 37.57 MiB/s [2024-11-06T12:50:49.765Z] 9641.67 IOPS, 37.66 MiB/s [2024-11-06T12:50:49.765Z] 9657.25 IOPS, 37.72 MiB/s [2024-11-06T12:50:49.765Z] 9954.60 IOPS, 38.89 MiB/s [2024-11-06T12:50:49.765Z] 10477.33 IOPS, 40.93 MiB/s [2024-11-06T12:50:49.765Z] 10840.43 IOPS, 42.35 MiB/s [2024-11-06T12:50:49.765Z] 10786.62 IOPS, 42.14 MiB/s [2024-11-06T12:50:49.765Z] 10677.44 IOPS, 41.71 MiB/s [2024-11-06T12:50:49.765Z] 10573.50 IOPS, 41.30 MiB/s [2024-11-06T12:50:49.765Z] 10489.45 IOPS, 40.97 MiB/s [2024-11-06T12:50:49.765Z] [2024-11-06 13:50:34.526432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.389 [2024-11-06 13:50:34.526466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.389 [2024-11-06 13:50:34.526504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.389 [2024-11-06 13:50:34.526521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-11-06 13:50:34.526536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-11-06 13:50:34.526552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-11-06 13:50:34.526568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-11-06 13:50:34.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:26.389 [2024-11-06 13:50:34.526594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-11-06 13:50:34.526599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.390 [2024-11-06 13:50:34.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 
sqhd:0037 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-11-06 13:50:34.526832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:26.390 [2024-11-06 13:50:34.526843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:26.391 [2024-11-06 13:50:34.526970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.526987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.526994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:26.391 [2024-11-06 13:50:34.527005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.391 [2024-11-06 13:50:34.527010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.392 [2024-11-06 13:50:34.527058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:26.392 
[2024-11-06 13:50:34.527156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.392 [2024-11-06 13:50:34.527161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:26.392 [2024-11-06 13:50:34.527172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 
13:50:34.527247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 
13:50:34.527343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 
13:50:34.527429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 
13:50:34.527522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.393 [2024-11-06 13:50:34.527592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:26.393 [2024-11-06 13:50:34.527603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.393 [2024-11-06 
[2024-11-06 13:50:34.527608 - 13:50:34.529277] nvme_qpair.c: repeated *NOTICE* nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE (sqid:1, lba:73040-73288, len:8) and READ (sqid:1, lba:72808-72992, len:8) commands, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 00:26:26.393
10310.75 IOPS, 40.28 MiB/s [2024-11-06T12:50:49.770Z] 9517.62 IOPS, 37.18 MiB/s [2024-11-06T12:50:49.770Z] 8837.79 IOPS, 34.52 MiB/s [2024-11-06T12:50:49.770Z] 8363.13 IOPS, 32.67 MiB/s [2024-11-06T12:50:49.770Z] 8659.56 IOPS, 33.83 MiB/s [2024-11-06T12:50:49.770Z] 8941.29 IOPS, 34.93 MiB/s [2024-11-06T12:50:49.770Z] 9383.50 IOPS, 36.65 MiB/s [2024-11-06T12:50:49.770Z] 9780.68 IOPS, 38.21 MiB/s [2024-11-06T12:50:49.770Z] 10045.30 IOPS, 39.24 MiB/s [2024-11-06T12:50:49.770Z] 10189.14 IOPS, 39.80 MiB/s [2024-11-06T12:50:49.770Z] 10320.50 IOPS, 40.31 MiB/s [2024-11-06T12:50:49.770Z] 10593.61 IOPS, 41.38 MiB/s [2024-11-06T12:50:49.770Z] 10856.83 IOPS, 42.41 MiB/s [2024-11-06T12:50:49.770Z]
[2024-11-06 13:50:47.246612 - 13:50:47.248361] nvme_qpair.c: repeated *NOTICE* nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE (sqid:1, lba:63672-64032, len:8) and READ (sqid:1, lba:63080-63296, len:8) commands, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 00:26:26.395
10972.04 IOPS, 42.86 MiB/s [2024-11-06T12:50:49.771Z] 10915.35 IOPS, 42.64 MiB/s [2024-11-06T12:50:49.771Z] Received shutdown signal, test time was about 26.772362 seconds
00:26:26.395
00:26:26.395 Latency(us)
00:26:26.395 [2024-11-06T12:50:49.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:26.395 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:26.395 Verification LBA range: start 0x0 length 0x4000
00:26:26.395 Nvme0n1 : 26.77 10878.74 42.50 0.00 0.00 11748.71 298.67 3019898.88
00:26:26.395 [2024-11-06T12:50:49.771Z] ===================================================================================================================
00:26:26.395 [2024-11-06T12:50:49.771Z] Total : 10878.74 42.50 0.00 0.00 11748.71 298.67 3019898.88
00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 --
# rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.395 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.396 rmmod nvme_tcp 00:26:26.713 rmmod nvme_fabrics 00:26:26.713 rmmod nvme_keyring 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 770395 ']' 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 770395 ']' 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:26.713 
13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 770395' 00:26:26.713 killing process with pid 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 770395 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.713 13:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.713 13:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.713 13:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.713 13:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.713 
13:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.713 13:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.293 00:26:29.293 real 0m40.454s 00:26:29.293 user 1m43.846s 00:26:29.293 sys 0m11.724s 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 ************************************ 00:26:29.293 END TEST nvmf_host_multipath_status 00:26:29.293 ************************************ 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 ************************************ 00:26:29.293 START TEST nvmf_discovery_remove_ifc 00:26:29.293 ************************************ 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:29.293 * Looking for test storage... 
00:26:29.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.293 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:26:29.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.294 --rc genhtml_branch_coverage=1 00:26:29.294 --rc genhtml_function_coverage=1 00:26:29.294 --rc genhtml_legend=1 00:26:29.294 --rc geninfo_all_blocks=1 00:26:29.294 --rc geninfo_unexecuted_blocks=1 00:26:29.294 00:26:29.294 ' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:29.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.294 --rc genhtml_branch_coverage=1 00:26:29.294 --rc genhtml_function_coverage=1 00:26:29.294 --rc genhtml_legend=1 00:26:29.294 --rc geninfo_all_blocks=1 00:26:29.294 --rc geninfo_unexecuted_blocks=1 00:26:29.294 00:26:29.294 ' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:29.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.294 --rc genhtml_branch_coverage=1 00:26:29.294 --rc genhtml_function_coverage=1 00:26:29.294 --rc genhtml_legend=1 00:26:29.294 --rc geninfo_all_blocks=1 00:26:29.294 --rc geninfo_unexecuted_blocks=1 00:26:29.294 00:26:29.294 ' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:29.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.294 --rc genhtml_branch_coverage=1 00:26:29.294 --rc genhtml_function_coverage=1 00:26:29.294 --rc genhtml_legend=1 00:26:29.294 --rc geninfo_all_blocks=1 00:26:29.294 --rc geninfo_unexecuted_blocks=1 00:26:29.294 00:26:29.294 ' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.294 
13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.294 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.295 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.295 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.295 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.295 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.295 13:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.443 13:50:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.443 13:50:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:37.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.443 13:50:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:37.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.443 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:37.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:37.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:26:37.444 00:26:37.444 --- 10.0.0.2 ping statistics --- 00:26:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.444 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:26:37.444 00:26:37.444 --- 10.0.0.1 ping statistics --- 00:26:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.444 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=780480 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 780480 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 780480 ']' 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:37.444 13:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.444 [2024-11-06 13:50:59.767718] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:26:37.444 [2024-11-06 13:50:59.767781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.444 [2024-11-06 13:50:59.864484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.444 [2024-11-06 13:50:59.914150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.444 [2024-11-06 13:50:59.914204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:37.444 [2024-11-06 13:50:59.914213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.444 [2024-11-06 13:50:59.914221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.444 [2024-11-06 13:50:59.914227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.444 [2024-11-06 13:50:59.914973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.444 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.444 [2024-11-06 13:51:00.601722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.445 [2024-11-06 13:51:00.609892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:37.445 null0 00:26:37.445 [2024-11-06 13:51:00.641898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=780787 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 780787 /tmp/host.sock 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 780787 ']' 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:37.445 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:37.445 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.445 [2024-11-06 13:51:00.690266] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:26:37.445 [2024-11-06 13:51:00.690308] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780787 ] 00:26:37.445 [2024-11-06 13:51:00.754390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.445 [2024-11-06 13:51:00.790273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.705 13:51:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.705 13:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.645 [2024-11-06 13:51:01.929792] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:38.645 [2024-11-06 13:51:01.929812] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:38.645 [2024-11-06 13:51:01.929826] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:38.645 [2024-11-06 13:51:02.018118] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:38.904 [2024-11-06 13:51:02.118971] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:38.904 [2024-11-06 13:51:02.119961] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x25483f0:1 started. 
00:26:38.904 [2024-11-06 13:51:02.121533] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:38.904 [2024-11-06 13:51:02.121575] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:38.904 [2024-11-06 13:51:02.121595] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:38.904 [2024-11-06 13:51:02.121608] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:38.904 [2024-11-06 13:51:02.121628] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.904 [2024-11-06 13:51:02.127976] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x25483f0 was disconnected and freed. delete nvme_qpair. 
00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:38.904 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.164 13:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
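The trace above shows the test's `wait_for_bdev` pattern: after the target-side interface is deleted and downed, the host is polled via `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs` once per second until the bdev list reaches the expected value. A minimal self-contained sketch of that polling shape (with `get_bdev_list` stubbed out, since it is a reconstruction and not the actual test helper):

```shell
#!/bin/bash
# Stub standing in for the real get_bdev_list, which wraps
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# Here we simulate the bdev appearing on the third poll.
get_bdev_list() {
    if [ "$POLLS" -ge 3 ]; then echo "nvme0n1"; else echo ""; fi
}

# Poll until the bdev list matches the expected string, mirroring the
# wait_for_bdev loop visible in the trace (which sleeps 1s between polls).
wait_for_bdev() {
    local expected=$1
    POLLS=0
    while :; do
        POLLS=$((POLLS + 1))
        [ "$(get_bdev_list)" = "$expected" ] && break
        sleep 0.1   # shortened from the test's 1s for the sketch
    done
}

wait_for_bdev nvme0n1
echo "bdev present after $POLLS polls"
```

Note the increment happens in the loop body, not inside `get_bdev_list`: a variable bumped inside `$(...)` command substitution would be lost to the subshell.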
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.103 13:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.042 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:41.302 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.302 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.302 13:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.243 13:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.184 13:51:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.184 13:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.568 [2024-11-06 13:51:07.562293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:44.568 [2024-11-06 13:51:07.562336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.568 [2024-11-06 13:51:07.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.568 [2024-11-06 13:51:07.562359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.568 [2024-11-06 13:51:07.562367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.568 [2024-11-06 13:51:07.562375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.568 [2024-11-06 13:51:07.562382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.568 [2024-11-06 13:51:07.562390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.568 [2024-11-06 13:51:07.562397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.568 [2024-11-06 13:51:07.562406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.568 [2024-11-06 13:51:07.562413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.568 [2024-11-06 13:51:07.562421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2524c00 is same with the state(6) to be set 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.568 [2024-11-06 13:51:07.572315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2524c00 (9): Bad file descriptor 00:26:44.568 [2024-11-06 13:51:07.582352] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:44.568 [2024-11-06 13:51:07.582366] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:44.568 [2024-11-06 13:51:07.582371] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:44.568 [2024-11-06 13:51:07.582376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:44.568 [2024-11-06 13:51:07.582396] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.568 13:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.510 [2024-11-06 13:51:08.596788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:45.510 [2024-11-06 13:51:08.596830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2524c00 with addr=10.0.0.2, port=4420 00:26:45.510 [2024-11-06 13:51:08.596847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2524c00 is same with the state(6) to be set 00:26:45.510 [2024-11-06 13:51:08.596873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2524c00 (9): Bad file descriptor 00:26:45.510 [2024-11-06 13:51:08.596920] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:45.510 [2024-11-06 13:51:08.596940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:45.510 [2024-11-06 13:51:08.596948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:45.510 [2024-11-06 13:51:08.596957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:45.510 [2024-11-06 13:51:08.596965] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:45.510 [2024-11-06 13:51:08.596970] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:45.510 [2024-11-06 13:51:08.596975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:45.510 [2024-11-06 13:51:08.596984] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:45.510 [2024-11-06 13:51:08.596989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.510 13:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.452 [2024-11-06 13:51:09.599363] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:46.452 [2024-11-06 13:51:09.599384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:46.452 [2024-11-06 13:51:09.599398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:46.452 [2024-11-06 13:51:09.599406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:46.452 [2024-11-06 13:51:09.599414] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:46.452 [2024-11-06 13:51:09.599422] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:46.452 [2024-11-06 13:51:09.599427] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:46.452 [2024-11-06 13:51:09.599432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:46.452 [2024-11-06 13:51:09.599455] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:46.452 [2024-11-06 13:51:09.599476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.452 [2024-11-06 13:51:09.599490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.452 [2024-11-06 13:51:09.599501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.453 [2024-11-06 13:51:09.599508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.453 [2024-11-06 13:51:09.599516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:46.453 [2024-11-06 13:51:09.599525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.453 [2024-11-06 13:51:09.599533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.453 [2024-11-06 13:51:09.599541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.453 [2024-11-06 13:51:09.599550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.453 [2024-11-06 13:51:09.599558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.453 [2024-11-06 13:51:09.599566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:46.453 [2024-11-06 13:51:09.599592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2514340 (9): Bad file descriptor 00:26:46.453 [2024-11-06 13:51:09.600590] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:46.453 [2024-11-06 13:51:09.600602] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.453 
13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.453 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.713 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
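The interface flap driven by the test is visible in the trace: `ip addr del` plus `ip link set ... down` inside the target's network namespace to force the host-side bdev away, then `ip addr add` plus `ip link set ... up` so discovery re-attaches (as `nvme1n1`). A hedged sketch of that sequence — the netns, interface, and address values (`cvl_0_0_ns_spdk`, `cvl_0_0`, `10.0.0.2/24`) are specific to this CI run, and the commands are guarded so the sketch is a no-op where the namespace does not exist:

```shell
#!/bin/bash
NS=cvl_0_0_ns_spdk   # namespace name from this run; will differ elsewhere
IF=cvl_0_0
IP=10.0.0.2/24

# Only run the flap when the tool and namespace are actually present.
if command -v ip >/dev/null 2>&1 && ip netns pids "$NS" >/dev/null 2>&1; then
    ip netns exec "$NS" ip addr del "$IP" dev "$IF"   # remove target address
    ip netns exec "$NS" ip link set "$IF" down        # take the link down
    # ...host-side bdev disappears here; the test polls for '' ...
    ip netns exec "$NS" ip addr add "$IP" dev "$IF"   # restore the address
    ip netns exec "$NS" ip link set "$IF" up          # discovery re-attaches
fi
DONE=1
echo "ifc flap sketch done"
```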
00:26:46.713 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:46.713 13:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.656 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:47.657 13:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.598 [2024-11-06 13:51:11.613619] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:48.598 [2024-11-06 13:51:11.613638] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:48.598 [2024-11-06 13:51:11.613651] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.598 [2024-11-06 13:51:11.745083] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.598 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.598 [2024-11-06 13:51:11.965356] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:48.598 [2024-11-06 13:51:11.966235] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2519130:1 started. 
00:26:48.598 [2024-11-06 13:51:11.967475] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:26:48.598 [2024-11-06 13:51:11.967510] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:26:48.598 [2024-11-06 13:51:11.967530] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:26:48.598 [2024-11-06 13:51:11.967545] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:26:48.598 [2024-11-06 13:51:11.967554] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:48.598 [2024-11-06 13:51:11.972079] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2519130 was disconnected and freed. delete nvme_qpair.
00:26:48.858 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:48.858 13:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:49.799 13:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 780787
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 780787 ']'
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 780787
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 780787
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:49.799 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 780787'
killing process with pid 780787
13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 780787
13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 780787
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 780480 ']'
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 780480
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 780480 ']'
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 780480
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 780480
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:50.060 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 780480'
killing process with pid 780480
13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 780480
13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 780480
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:50.321 13:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:52.235
00:26:52.235 real 0m23.369s
00:26:52.235 user 0m27.687s
00:26:52.235 sys 0m6.958s
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:52.235 ************************************
00:26:52.235 END TEST nvmf_discovery_remove_ifc
00:26:52.235 ************************************
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:52.235 ************************************
00:26:52.235 START TEST nvmf_identify_kernel_target
00:26:52.235 ************************************
00:26:52.235 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:26:52.497 * Looking for test storage...
00:26:52.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:52.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.497 --rc genhtml_branch_coverage=1
00:26:52.497 --rc genhtml_function_coverage=1
00:26:52.497 --rc genhtml_legend=1
00:26:52.497 --rc geninfo_all_blocks=1
00:26:52.497 --rc geninfo_unexecuted_blocks=1
00:26:52.497
00:26:52.497 '
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:26:52.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.497 --rc genhtml_branch_coverage=1
00:26:52.497 --rc genhtml_function_coverage=1
00:26:52.497 --rc genhtml_legend=1
00:26:52.497 --rc geninfo_all_blocks=1
00:26:52.497 --rc geninfo_unexecuted_blocks=1
00:26:52.497
00:26:52.497 '
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:26:52.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.497 --rc genhtml_branch_coverage=1
00:26:52.497 --rc genhtml_function_coverage=1
00:26:52.497 --rc genhtml_legend=1
00:26:52.497 --rc geninfo_all_blocks=1
00:26:52.497 --rc geninfo_unexecuted_blocks=1
00:26:52.497
00:26:52.497 '
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:26:52.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.497 --rc genhtml_branch_coverage=1
00:26:52.497 --rc genhtml_function_coverage=1
00:26:52.497 --rc genhtml_legend=1
00:26:52.497 --rc geninfo_all_blocks=1
00:26:52.497 --rc geninfo_unexecuted_blocks=1
00:26:52.497
00:26:52.497 '
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:26:52.497 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
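The `cmp_versions 1.15 '<' 2` trace earlier in this block shows scripts/common.sh splitting both versions on `.`/`-`/`:` and comparing component by component. A simplified reimplementation of that dotted-version "less than" check is sketched below for illustration; `version_lt` is a hypothetical name, not the script's actual function, and it handles only plain numeric components.

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version comparison traced above
# (lt 1.15 2 -> cmp_versions 1.15 '<' 2). Assumption: numeric-only
# components; missing components compare as 0, so "2" acts like "2.0".
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then
            return 0            # first differing component decides
        elif (( a > b )); then
            return 1
        fi
    done
    return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2 || echo "2.1 >= 2"
```

Note the numeric (not lexicographic) comparison: `1.9 < 1.15` holds here, which is why the script compares components rather than the raw strings. The log uses this result to decide which lcov coverage options to export.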
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:26:52.498 13:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:00.638 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.639 13:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:00.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:27:00.639 00:27:00.639 --- 10.0.0.2 ping statistics --- 00:27:00.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.639 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:27:00.639 00:27:00.639 --- 10.0.0.1 ping statistics --- 00:27:00.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.639 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:00.639 
13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:00.639 13:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:03.179 Waiting for block devices as requested 00:27:03.179 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:03.179 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:03.179 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:03.440 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:03.440 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:03.440 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:03.700 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:03.700 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:03.700 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:03.960 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:03.960 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:03.960 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:04.220 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:04.220 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:04.220 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:27:04.220 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:04.481 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:04.741 13:51:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:04.741 No valid GPT data, bailing 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:04.741 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:05.002 00:27:05.002 Discovery Log Number of Records 2, Generation counter 2 00:27:05.002 =====Discovery Log Entry 0====== 00:27:05.002 trtype: tcp 00:27:05.002 adrfam: ipv4 00:27:05.002 subtype: current discovery subsystem 
00:27:05.002 treq: not specified, sq flow control disable supported 00:27:05.002 portid: 1 00:27:05.002 trsvcid: 4420 00:27:05.002 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:05.002 traddr: 10.0.0.1 00:27:05.002 eflags: none 00:27:05.002 sectype: none 00:27:05.002 =====Discovery Log Entry 1====== 00:27:05.002 trtype: tcp 00:27:05.002 adrfam: ipv4 00:27:05.002 subtype: nvme subsystem 00:27:05.002 treq: not specified, sq flow control disable supported 00:27:05.002 portid: 1 00:27:05.002 trsvcid: 4420 00:27:05.002 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:05.002 traddr: 10.0.0.1 00:27:05.002 eflags: none 00:27:05.002 sectype: none 00:27:05.002 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:05.002 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:05.002 ===================================================== 00:27:05.002 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:05.002 ===================================================== 00:27:05.002 Controller Capabilities/Features 00:27:05.002 ================================ 00:27:05.002 Vendor ID: 0000 00:27:05.002 Subsystem Vendor ID: 0000 00:27:05.002 Serial Number: bfc0d5d6476a6e8671d7 00:27:05.002 Model Number: Linux 00:27:05.002 Firmware Version: 6.8.9-20 00:27:05.002 Recommended Arb Burst: 0 00:27:05.002 IEEE OUI Identifier: 00 00 00 00:27:05.002 Multi-path I/O 00:27:05.003 May have multiple subsystem ports: No 00:27:05.003 May have multiple controllers: No 00:27:05.003 Associated with SR-IOV VF: No 00:27:05.003 Max Data Transfer Size: Unlimited 00:27:05.003 Max Number of Namespaces: 0 00:27:05.003 Max Number of I/O Queues: 1024 00:27:05.003 NVMe Specification Version (VS): 1.3 00:27:05.003 NVMe Specification Version (Identify): 1.3 00:27:05.003 Maximum Queue Entries: 1024 
00:27:05.003 Contiguous Queues Required: No 00:27:05.003 Arbitration Mechanisms Supported 00:27:05.003 Weighted Round Robin: Not Supported 00:27:05.003 Vendor Specific: Not Supported 00:27:05.003 Reset Timeout: 7500 ms 00:27:05.003 Doorbell Stride: 4 bytes 00:27:05.003 NVM Subsystem Reset: Not Supported 00:27:05.003 Command Sets Supported 00:27:05.003 NVM Command Set: Supported 00:27:05.003 Boot Partition: Not Supported 00:27:05.003 Memory Page Size Minimum: 4096 bytes 00:27:05.003 Memory Page Size Maximum: 4096 bytes 00:27:05.003 Persistent Memory Region: Not Supported 00:27:05.003 Optional Asynchronous Events Supported 00:27:05.003 Namespace Attribute Notices: Not Supported 00:27:05.003 Firmware Activation Notices: Not Supported 00:27:05.003 ANA Change Notices: Not Supported 00:27:05.003 PLE Aggregate Log Change Notices: Not Supported 00:27:05.003 LBA Status Info Alert Notices: Not Supported 00:27:05.003 EGE Aggregate Log Change Notices: Not Supported 00:27:05.003 Normal NVM Subsystem Shutdown event: Not Supported 00:27:05.003 Zone Descriptor Change Notices: Not Supported 00:27:05.003 Discovery Log Change Notices: Supported 00:27:05.003 Controller Attributes 00:27:05.003 128-bit Host Identifier: Not Supported 00:27:05.003 Non-Operational Permissive Mode: Not Supported 00:27:05.003 NVM Sets: Not Supported 00:27:05.003 Read Recovery Levels: Not Supported 00:27:05.003 Endurance Groups: Not Supported 00:27:05.003 Predictable Latency Mode: Not Supported 00:27:05.003 Traffic Based Keep ALive: Not Supported 00:27:05.003 Namespace Granularity: Not Supported 00:27:05.003 SQ Associations: Not Supported 00:27:05.003 UUID List: Not Supported 00:27:05.003 Multi-Domain Subsystem: Not Supported 00:27:05.003 Fixed Capacity Management: Not Supported 00:27:05.003 Variable Capacity Management: Not Supported 00:27:05.003 Delete Endurance Group: Not Supported 00:27:05.003 Delete NVM Set: Not Supported 00:27:05.003 Extended LBA Formats Supported: Not Supported 00:27:05.003 Flexible 
Data Placement Supported: Not Supported 00:27:05.003 00:27:05.003 Controller Memory Buffer Support 00:27:05.003 ================================ 00:27:05.003 Supported: No 00:27:05.003 00:27:05.003 Persistent Memory Region Support 00:27:05.003 ================================ 00:27:05.003 Supported: No 00:27:05.003 00:27:05.003 Admin Command Set Attributes 00:27:05.003 ============================ 00:27:05.003 Security Send/Receive: Not Supported 00:27:05.003 Format NVM: Not Supported 00:27:05.003 Firmware Activate/Download: Not Supported 00:27:05.003 Namespace Management: Not Supported 00:27:05.003 Device Self-Test: Not Supported 00:27:05.003 Directives: Not Supported 00:27:05.003 NVMe-MI: Not Supported 00:27:05.003 Virtualization Management: Not Supported 00:27:05.003 Doorbell Buffer Config: Not Supported 00:27:05.003 Get LBA Status Capability: Not Supported 00:27:05.003 Command & Feature Lockdown Capability: Not Supported 00:27:05.003 Abort Command Limit: 1 00:27:05.003 Async Event Request Limit: 1 00:27:05.003 Number of Firmware Slots: N/A 00:27:05.003 Firmware Slot 1 Read-Only: N/A 00:27:05.003 Firmware Activation Without Reset: N/A 00:27:05.003 Multiple Update Detection Support: N/A 00:27:05.003 Firmware Update Granularity: No Information Provided 00:27:05.003 Per-Namespace SMART Log: No 00:27:05.003 Asymmetric Namespace Access Log Page: Not Supported 00:27:05.003 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:05.003 Command Effects Log Page: Not Supported 00:27:05.003 Get Log Page Extended Data: Supported 00:27:05.003 Telemetry Log Pages: Not Supported 00:27:05.003 Persistent Event Log Pages: Not Supported 00:27:05.003 Supported Log Pages Log Page: May Support 00:27:05.003 Commands Supported & Effects Log Page: Not Supported 00:27:05.003 Feature Identifiers & Effects Log Page:May Support 00:27:05.003 NVMe-MI Commands & Effects Log Page: May Support 00:27:05.003 Data Area 4 for Telemetry Log: Not Supported 00:27:05.003 Error Log Page Entries 
Supported: 1 00:27:05.003 Keep Alive: Not Supported 00:27:05.003 00:27:05.003 NVM Command Set Attributes 00:27:05.003 ========================== 00:27:05.003 Submission Queue Entry Size 00:27:05.003 Max: 1 00:27:05.003 Min: 1 00:27:05.003 Completion Queue Entry Size 00:27:05.003 Max: 1 00:27:05.003 Min: 1 00:27:05.003 Number of Namespaces: 0 00:27:05.003 Compare Command: Not Supported 00:27:05.003 Write Uncorrectable Command: Not Supported 00:27:05.003 Dataset Management Command: Not Supported 00:27:05.003 Write Zeroes Command: Not Supported 00:27:05.003 Set Features Save Field: Not Supported 00:27:05.003 Reservations: Not Supported 00:27:05.003 Timestamp: Not Supported 00:27:05.003 Copy: Not Supported 00:27:05.003 Volatile Write Cache: Not Present 00:27:05.003 Atomic Write Unit (Normal): 1 00:27:05.003 Atomic Write Unit (PFail): 1 00:27:05.003 Atomic Compare & Write Unit: 1 00:27:05.003 Fused Compare & Write: Not Supported 00:27:05.003 Scatter-Gather List 00:27:05.003 SGL Command Set: Supported 00:27:05.003 SGL Keyed: Not Supported 00:27:05.003 SGL Bit Bucket Descriptor: Not Supported 00:27:05.003 SGL Metadata Pointer: Not Supported 00:27:05.003 Oversized SGL: Not Supported 00:27:05.003 SGL Metadata Address: Not Supported 00:27:05.003 SGL Offset: Supported 00:27:05.003 Transport SGL Data Block: Not Supported 00:27:05.003 Replay Protected Memory Block: Not Supported 00:27:05.003 00:27:05.003 Firmware Slot Information 00:27:05.003 ========================= 00:27:05.003 Active slot: 0 00:27:05.003 00:27:05.003 00:27:05.003 Error Log 00:27:05.003 ========= 00:27:05.003 00:27:05.003 Active Namespaces 00:27:05.003 ================= 00:27:05.003 Discovery Log Page 00:27:05.003 ================== 00:27:05.003 Generation Counter: 2 00:27:05.003 Number of Records: 2 00:27:05.003 Record Format: 0 00:27:05.003 00:27:05.003 Discovery Log Entry 0 00:27:05.003 ---------------------- 00:27:05.003 Transport Type: 3 (TCP) 00:27:05.003 Address Family: 1 (IPv4) 00:27:05.003 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:05.003 Entry Flags: 00:27:05.003 Duplicate Returned Information: 0 00:27:05.003 Explicit Persistent Connection Support for Discovery: 0 00:27:05.003 Transport Requirements: 00:27:05.003 Secure Channel: Not Specified 00:27:05.003 Port ID: 1 (0x0001) 00:27:05.003 Controller ID: 65535 (0xffff) 00:27:05.003 Admin Max SQ Size: 32 00:27:05.003 Transport Service Identifier: 4420 00:27:05.003 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:05.003 Transport Address: 10.0.0.1 00:27:05.003 Discovery Log Entry 1 00:27:05.003 ---------------------- 00:27:05.003 Transport Type: 3 (TCP) 00:27:05.003 Address Family: 1 (IPv4) 00:27:05.003 Subsystem Type: 2 (NVM Subsystem) 00:27:05.003 Entry Flags: 00:27:05.003 Duplicate Returned Information: 0 00:27:05.003 Explicit Persistent Connection Support for Discovery: 0 00:27:05.003 Transport Requirements: 00:27:05.003 Secure Channel: Not Specified 00:27:05.003 Port ID: 1 (0x0001) 00:27:05.003 Controller ID: 65535 (0xffff) 00:27:05.003 Admin Max SQ Size: 32 00:27:05.003 Transport Service Identifier: 4420 00:27:05.003 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:05.003 Transport Address: 10.0.0.1 00:27:05.003 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.003 get_feature(0x01) failed 00:27:05.003 get_feature(0x02) failed 00:27:05.003 get_feature(0x04) failed 00:27:05.003 ===================================================== 00:27:05.003 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.003 ===================================================== 00:27:05.003 Controller Capabilities/Features 00:27:05.003 ================================ 00:27:05.003 Vendor ID: 0000 00:27:05.003 Subsystem Vendor ID: 
0000 00:27:05.003 Serial Number: 0b8f44ff49f9e36d3ba4 00:27:05.003 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:05.003 Firmware Version: 6.8.9-20 00:27:05.003 Recommended Arb Burst: 6 00:27:05.003 IEEE OUI Identifier: 00 00 00 00:27:05.003 Multi-path I/O 00:27:05.003 May have multiple subsystem ports: Yes 00:27:05.003 May have multiple controllers: Yes 00:27:05.003 Associated with SR-IOV VF: No 00:27:05.003 Max Data Transfer Size: Unlimited 00:27:05.003 Max Number of Namespaces: 1024 00:27:05.003 Max Number of I/O Queues: 128 00:27:05.003 NVMe Specification Version (VS): 1.3 00:27:05.003 NVMe Specification Version (Identify): 1.3 00:27:05.003 Maximum Queue Entries: 1024 00:27:05.004 Contiguous Queues Required: No 00:27:05.004 Arbitration Mechanisms Supported 00:27:05.004 Weighted Round Robin: Not Supported 00:27:05.004 Vendor Specific: Not Supported 00:27:05.004 Reset Timeout: 7500 ms 00:27:05.004 Doorbell Stride: 4 bytes 00:27:05.004 NVM Subsystem Reset: Not Supported 00:27:05.004 Command Sets Supported 00:27:05.004 NVM Command Set: Supported 00:27:05.004 Boot Partition: Not Supported 00:27:05.004 Memory Page Size Minimum: 4096 bytes 00:27:05.004 Memory Page Size Maximum: 4096 bytes 00:27:05.004 Persistent Memory Region: Not Supported 00:27:05.004 Optional Asynchronous Events Supported 00:27:05.004 Namespace Attribute Notices: Supported 00:27:05.004 Firmware Activation Notices: Not Supported 00:27:05.004 ANA Change Notices: Supported 00:27:05.004 PLE Aggregate Log Change Notices: Not Supported 00:27:05.004 LBA Status Info Alert Notices: Not Supported 00:27:05.004 EGE Aggregate Log Change Notices: Not Supported 00:27:05.004 Normal NVM Subsystem Shutdown event: Not Supported 00:27:05.004 Zone Descriptor Change Notices: Not Supported 00:27:05.004 Discovery Log Change Notices: Not Supported 00:27:05.004 Controller Attributes 00:27:05.004 128-bit Host Identifier: Supported 00:27:05.004 Non-Operational Permissive Mode: Not Supported 00:27:05.004 NVM Sets: Not 
Supported 00:27:05.004 Read Recovery Levels: Not Supported 00:27:05.004 Endurance Groups: Not Supported 00:27:05.004 Predictable Latency Mode: Not Supported 00:27:05.004 Traffic Based Keep ALive: Supported 00:27:05.004 Namespace Granularity: Not Supported 00:27:05.004 SQ Associations: Not Supported 00:27:05.004 UUID List: Not Supported 00:27:05.004 Multi-Domain Subsystem: Not Supported 00:27:05.004 Fixed Capacity Management: Not Supported 00:27:05.004 Variable Capacity Management: Not Supported 00:27:05.004 Delete Endurance Group: Not Supported 00:27:05.004 Delete NVM Set: Not Supported 00:27:05.004 Extended LBA Formats Supported: Not Supported 00:27:05.004 Flexible Data Placement Supported: Not Supported 00:27:05.004 00:27:05.004 Controller Memory Buffer Support 00:27:05.004 ================================ 00:27:05.004 Supported: No 00:27:05.004 00:27:05.004 Persistent Memory Region Support 00:27:05.004 ================================ 00:27:05.004 Supported: No 00:27:05.004 00:27:05.004 Admin Command Set Attributes 00:27:05.004 ============================ 00:27:05.004 Security Send/Receive: Not Supported 00:27:05.004 Format NVM: Not Supported 00:27:05.004 Firmware Activate/Download: Not Supported 00:27:05.004 Namespace Management: Not Supported 00:27:05.004 Device Self-Test: Not Supported 00:27:05.004 Directives: Not Supported 00:27:05.004 NVMe-MI: Not Supported 00:27:05.004 Virtualization Management: Not Supported 00:27:05.004 Doorbell Buffer Config: Not Supported 00:27:05.004 Get LBA Status Capability: Not Supported 00:27:05.004 Command & Feature Lockdown Capability: Not Supported 00:27:05.004 Abort Command Limit: 4 00:27:05.004 Async Event Request Limit: 4 00:27:05.004 Number of Firmware Slots: N/A 00:27:05.004 Firmware Slot 1 Read-Only: N/A 00:27:05.004 Firmware Activation Without Reset: N/A 00:27:05.004 Multiple Update Detection Support: N/A 00:27:05.004 Firmware Update Granularity: No Information Provided 00:27:05.004 Per-Namespace SMART Log: Yes 
00:27:05.004 Asymmetric Namespace Access Log Page: Supported 00:27:05.004 ANA Transition Time : 10 sec 00:27:05.004 00:27:05.004 Asymmetric Namespace Access Capabilities 00:27:05.004 ANA Optimized State : Supported 00:27:05.004 ANA Non-Optimized State : Supported 00:27:05.004 ANA Inaccessible State : Supported 00:27:05.004 ANA Persistent Loss State : Supported 00:27:05.004 ANA Change State : Supported 00:27:05.004 ANAGRPID is not changed : No 00:27:05.004 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:05.004 00:27:05.004 ANA Group Identifier Maximum : 128 00:27:05.004 Number of ANA Group Identifiers : 128 00:27:05.004 Max Number of Allowed Namespaces : 1024 00:27:05.004 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:05.004 Command Effects Log Page: Supported 00:27:05.004 Get Log Page Extended Data: Supported 00:27:05.004 Telemetry Log Pages: Not Supported 00:27:05.004 Persistent Event Log Pages: Not Supported 00:27:05.004 Supported Log Pages Log Page: May Support 00:27:05.004 Commands Supported & Effects Log Page: Not Supported 00:27:05.004 Feature Identifiers & Effects Log Page:May Support 00:27:05.004 NVMe-MI Commands & Effects Log Page: May Support 00:27:05.004 Data Area 4 for Telemetry Log: Not Supported 00:27:05.004 Error Log Page Entries Supported: 128 00:27:05.004 Keep Alive: Supported 00:27:05.004 Keep Alive Granularity: 1000 ms 00:27:05.004 00:27:05.004 NVM Command Set Attributes 00:27:05.004 ========================== 00:27:05.004 Submission Queue Entry Size 00:27:05.004 Max: 64 00:27:05.004 Min: 64 00:27:05.004 Completion Queue Entry Size 00:27:05.004 Max: 16 00:27:05.004 Min: 16 00:27:05.004 Number of Namespaces: 1024 00:27:05.004 Compare Command: Not Supported 00:27:05.004 Write Uncorrectable Command: Not Supported 00:27:05.004 Dataset Management Command: Supported 00:27:05.004 Write Zeroes Command: Supported 00:27:05.004 Set Features Save Field: Not Supported 00:27:05.004 Reservations: Not Supported 00:27:05.004 Timestamp: Not Supported 
00:27:05.004 Copy: Not Supported 00:27:05.004 Volatile Write Cache: Present 00:27:05.004 Atomic Write Unit (Normal): 1 00:27:05.004 Atomic Write Unit (PFail): 1 00:27:05.004 Atomic Compare & Write Unit: 1 00:27:05.004 Fused Compare & Write: Not Supported 00:27:05.004 Scatter-Gather List 00:27:05.004 SGL Command Set: Supported 00:27:05.004 SGL Keyed: Not Supported 00:27:05.004 SGL Bit Bucket Descriptor: Not Supported 00:27:05.004 SGL Metadata Pointer: Not Supported 00:27:05.004 Oversized SGL: Not Supported 00:27:05.004 SGL Metadata Address: Not Supported 00:27:05.004 SGL Offset: Supported 00:27:05.004 Transport SGL Data Block: Not Supported 00:27:05.004 Replay Protected Memory Block: Not Supported 00:27:05.004 00:27:05.004 Firmware Slot Information 00:27:05.004 ========================= 00:27:05.004 Active slot: 0 00:27:05.004 00:27:05.004 Asymmetric Namespace Access 00:27:05.004 =========================== 00:27:05.004 Change Count : 0 00:27:05.004 Number of ANA Group Descriptors : 1 00:27:05.004 ANA Group Descriptor : 0 00:27:05.004 ANA Group ID : 1 00:27:05.004 Number of NSID Values : 1 00:27:05.004 Change Count : 0 00:27:05.004 ANA State : 1 00:27:05.004 Namespace Identifier : 1 00:27:05.004 00:27:05.004 Commands Supported and Effects 00:27:05.004 ============================== 00:27:05.004 Admin Commands 00:27:05.004 -------------- 00:27:05.004 Get Log Page (02h): Supported 00:27:05.004 Identify (06h): Supported 00:27:05.004 Abort (08h): Supported 00:27:05.004 Set Features (09h): Supported 00:27:05.004 Get Features (0Ah): Supported 00:27:05.004 Asynchronous Event Request (0Ch): Supported 00:27:05.004 Keep Alive (18h): Supported 00:27:05.004 I/O Commands 00:27:05.004 ------------ 00:27:05.004 Flush (00h): Supported 00:27:05.004 Write (01h): Supported LBA-Change 00:27:05.004 Read (02h): Supported 00:27:05.004 Write Zeroes (08h): Supported LBA-Change 00:27:05.004 Dataset Management (09h): Supported 00:27:05.004 00:27:05.004 Error Log 00:27:05.004 ========= 
00:27:05.004 Entry: 0 00:27:05.004 Error Count: 0x3 00:27:05.004 Submission Queue Id: 0x0 00:27:05.004 Command Id: 0x5 00:27:05.004 Phase Bit: 0 00:27:05.004 Status Code: 0x2 00:27:05.004 Status Code Type: 0x0 00:27:05.004 Do Not Retry: 1 00:27:05.004 Error Location: 0x28 00:27:05.004 LBA: 0x0 00:27:05.004 Namespace: 0x0 00:27:05.004 Vendor Log Page: 0x0 00:27:05.004 ----------- 00:27:05.004 Entry: 1 00:27:05.004 Error Count: 0x2 00:27:05.004 Submission Queue Id: 0x0 00:27:05.004 Command Id: 0x5 00:27:05.004 Phase Bit: 0 00:27:05.004 Status Code: 0x2 00:27:05.004 Status Code Type: 0x0 00:27:05.004 Do Not Retry: 1 00:27:05.004 Error Location: 0x28 00:27:05.004 LBA: 0x0 00:27:05.004 Namespace: 0x0 00:27:05.004 Vendor Log Page: 0x0 00:27:05.004 ----------- 00:27:05.004 Entry: 2 00:27:05.004 Error Count: 0x1 00:27:05.004 Submission Queue Id: 0x0 00:27:05.004 Command Id: 0x4 00:27:05.004 Phase Bit: 0 00:27:05.004 Status Code: 0x2 00:27:05.004 Status Code Type: 0x0 00:27:05.004 Do Not Retry: 1 00:27:05.004 Error Location: 0x28 00:27:05.004 LBA: 0x0 00:27:05.004 Namespace: 0x0 00:27:05.004 Vendor Log Page: 0x0 00:27:05.004 00:27:05.004 Number of Queues 00:27:05.004 ================ 00:27:05.004 Number of I/O Submission Queues: 128 00:27:05.004 Number of I/O Completion Queues: 128 00:27:05.004 00:27:05.004 ZNS Specific Controller Data 00:27:05.005 ============================ 00:27:05.005 Zone Append Size Limit: 0 00:27:05.005 00:27:05.005 00:27:05.005 Active Namespaces 00:27:05.005 ================= 00:27:05.005 get_feature(0x05) failed 00:27:05.005 Namespace ID:1 00:27:05.005 Command Set Identifier: NVM (00h) 00:27:05.005 Deallocate: Supported 00:27:05.005 Deallocated/Unwritten Error: Not Supported 00:27:05.005 Deallocated Read Value: Unknown 00:27:05.005 Deallocate in Write Zeroes: Not Supported 00:27:05.005 Deallocated Guard Field: 0xFFFF 00:27:05.005 Flush: Supported 00:27:05.005 Reservation: Not Supported 00:27:05.005 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:05.005 Size (in LBAs): 3750748848 (1788GiB) 00:27:05.005 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:05.005 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:05.005 UUID: 14996b31-c40c-4908-a4a2-6ad3551c4b4e 00:27:05.005 Thin Provisioning: Not Supported 00:27:05.005 Per-NS Atomic Units: Yes 00:27:05.005 Atomic Write Unit (Normal): 8 00:27:05.005 Atomic Write Unit (PFail): 8 00:27:05.005 Preferred Write Granularity: 8 00:27:05.005 Atomic Compare & Write Unit: 8 00:27:05.005 Atomic Boundary Size (Normal): 0 00:27:05.005 Atomic Boundary Size (PFail): 0 00:27:05.005 Atomic Boundary Offset: 0 00:27:05.005 NGUID/EUI64 Never Reused: No 00:27:05.005 ANA group ID: 1 00:27:05.005 Namespace Write Protected: No 00:27:05.005 Number of LBA Formats: 1 00:27:05.005 Current LBA Format: LBA Format #00 00:27:05.005 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:05.005 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.005 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.005 rmmod nvme_tcp 00:27:05.005 rmmod nvme_fabrics 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:05.265 13:51:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.265 13:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:07.173 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:07.433 13:51:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:10.732 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:10.733 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:10.993 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:10.993 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:11.253 00:27:11.253 real 0m18.972s 00:27:11.253 user 0m5.069s 00:27:11.253 sys 0m10.867s 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:11.253 ************************************ 00:27:11.253 END TEST nvmf_identify_kernel_target 00:27:11.253 ************************************ 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:11.253 13:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.515 ************************************ 00:27:11.515 START TEST nvmf_auth_host 00:27:11.515 ************************************ 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:11.515 * Looking for test storage... 
00:27:11.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.515 --rc genhtml_branch_coverage=1 00:27:11.515 --rc genhtml_function_coverage=1 00:27:11.515 --rc genhtml_legend=1 00:27:11.515 --rc geninfo_all_blocks=1 00:27:11.515 --rc geninfo_unexecuted_blocks=1 00:27:11.515 00:27:11.515 ' 00:27:11.515 13:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.515 --rc genhtml_branch_coverage=1 00:27:11.515 --rc genhtml_function_coverage=1 00:27:11.515 --rc genhtml_legend=1 00:27:11.515 --rc geninfo_all_blocks=1 00:27:11.515 --rc geninfo_unexecuted_blocks=1 00:27:11.515 00:27:11.515 ' 00:27:11.515 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.515 --rc genhtml_branch_coverage=1 00:27:11.515 --rc genhtml_function_coverage=1 00:27:11.515 --rc genhtml_legend=1 00:27:11.516 --rc geninfo_all_blocks=1 00:27:11.516 --rc geninfo_unexecuted_blocks=1 00:27:11.516 00:27:11.516 ' 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.516 --rc genhtml_branch_coverage=1 00:27:11.516 --rc genhtml_function_coverage=1 00:27:11.516 --rc genhtml_legend=1 00:27:11.516 --rc geninfo_all_blocks=1 00:27:11.516 --rc geninfo_unexecuted_blocks=1 00:27:11.516 00:27:11.516 ' 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.516 13:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:11.516 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.777 13:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.777 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:19.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:19.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:19.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:19.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:19.924 13:51:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.924 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.925 13:51:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:27:19.925 00:27:19.925 --- 10.0.0.2 ping statistics --- 00:27:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.925 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:27:19.925 00:27:19.925 --- 10.0.0.1 ping statistics --- 00:27:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.925 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=795513 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 795513 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 795513 ']' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.925 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:19.925 13:51:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50a06dc37bef88474a55688a68e0c588 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7gy 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50a06dc37bef88474a55688a68e0c588 0 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50a06dc37bef88474a55688a68e0c588 0 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50a06dc37bef88474a55688a68e0c588 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7gy 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7gy 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7gy 
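The `gen_dhchap_key` trace above reads random bytes with `xxd` from `/dev/urandom`, then pipes them through an inline `python -` (`format_dhchap_key` in nvmf/common.sh, whose body is not shown in this log) to wrap them in the DH-HMAC-CHAP secret representation. A minimal sketch of that formatting step, assuming the standard `DHHC-1:<digest-id>:<base64(key || crc32)>:` layout from the NVMe DH-HMAC-CHAP definition; the function name mirrors the shell helper but the implementation here is illustrative:

```python
import base64
import struct
import zlib


def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a raw hex key in the NVMe DH-HMAC-CHAP secret representation.

    The secret is 'DHHC-1:<dd>:<base64 of the key bytes followed by the
    little-endian CRC-32 of those bytes>:', where <dd> is the hash id
    used by the digests map above (0 = null, 1 = sha256, 2 = sha384,
    3 = sha512).
    """
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC-32 appended little-endian
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode())


# The first key generated above: 32 hex digits (16 bytes), digest 'null' = 0.
print(format_dhchap_key("50a06dc37bef88474a55688a68e0c588", 0))
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files and later registered with the target; the trailing CRC lets a consumer validate the key after base64-decoding it.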
00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:19.925 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa2bdb4c02e1ebf48fc4045d1f7db6f73849740aceb597946aa18560291d5749 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kfd 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa2bdb4c02e1ebf48fc4045d1f7db6f73849740aceb597946aa18560291d5749 3 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa2bdb4c02e1ebf48fc4045d1f7db6f73849740aceb597946aa18560291d5749 3 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa2bdb4c02e1ebf48fc4045d1f7db6f73849740aceb597946aa18560291d5749 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kfd 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kfd 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kfd 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad002acf0fd82ffd2e4ea0d091dad53eb9d40500a1b5bf78 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YLM 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad002acf0fd82ffd2e4ea0d091dad53eb9d40500a1b5bf78 0 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad002acf0fd82ffd2e4ea0d091dad53eb9d40500a1b5bf78 0 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.186 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad002acf0fd82ffd2e4ea0d091dad53eb9d40500a1b5bf78 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YLM 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YLM 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YLM 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=452f39e91ea1506fe13e5ddc3c8f7976c5492394fde6885a 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.84c 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 452f39e91ea1506fe13e5ddc3c8f7976c5492394fde6885a 2 00:27:20.187 13:51:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 452f39e91ea1506fe13e5ddc3c8f7976c5492394fde6885a 2 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=452f39e91ea1506fe13e5ddc3c8f7976c5492394fde6885a 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.84c 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.84c 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.84c 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7ca46b575c93ba446d92bb4cbfa091c1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xJE 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7ca46b575c93ba446d92bb4cbfa091c1 1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7ca46b575c93ba446d92bb4cbfa091c1 1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7ca46b575c93ba446d92bb4cbfa091c1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xJE 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xJE 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xJE 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=245d10d23a8f19bed9535b62002813a6 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qdc 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 245d10d23a8f19bed9535b62002813a6 1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 245d10d23a8f19bed9535b62002813a6 1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=245d10d23a8f19bed9535b62002813a6 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:20.187 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qdc 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qdc 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qdc 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.448 13:51:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b91f6ad038bb159f48fe18fbbede0002ad8d19178b1322ce 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4BE 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b91f6ad038bb159f48fe18fbbede0002ad8d19178b1322ce 2 00:27:20.448 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b91f6ad038bb159f48fe18fbbede0002ad8d19178b1322ce 2 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b91f6ad038bb159f48fe18fbbede0002ad8d19178b1322ce 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4BE 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4BE 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4BE 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=44da3be8672c8b834b83457c9824a7c2 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.auC 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 44da3be8672c8b834b83457c9824a7c2 0 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 44da3be8672c8b834b83457c9824a7c2 0 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=44da3be8672c8b834b83457c9824a7c2 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.auC 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.auC 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.auC 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d503cc12415cf9f8a8f6aa4367f94b2fa143affe45a461e3f1994b40a28c46c3 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Qdk 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d503cc12415cf9f8a8f6aa4367f94b2fa143affe45a461e3f1994b40a28c46c3 3 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d503cc12415cf9f8a8f6aa4367f94b2fa143affe45a461e3f1994b40a28c46c3 3 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d503cc12415cf9f8a8f6aa4367f94b2fa143affe45a461e3f1994b40a28c46c3 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:20.449 13:51:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Qdk 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Qdk 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qdk 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 795513 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 795513 ']' 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:20.449 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7gy 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kfd ]] 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kfd 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.710 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YLM 00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
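The `host/auth.sh` loop traced above walks `keys[@]` and hands each generated key file to the running target as `keyN` via the `keyring_file_add_key` RPC, registering the matching controller key (when one exists) as `ckeyN`. A sketch of the command sequence that loop issues, using the key paths from this log; the `scripts/rpc.py` location is an assumption for illustration:

```python
# Key files generated earlier in this log, indexed as in keys[@]/ckeys[@].
keys = {
    0: "/tmp/spdk.key-null.7gy",
    1: "/tmp/spdk.key-null.YLM",
    2: "/tmp/spdk.key-sha256.xJE",
    3: "/tmp/spdk.key-sha384.4BE",
    4: "/tmp/spdk.key-sha512.Qdk",
}
ckeys = {
    0: "/tmp/spdk.key-sha512.kfd",
    1: "/tmp/spdk.key-sha384.84c",
    2: "/tmp/spdk.key-sha256.qdc",
    3: "/tmp/spdk.key-null.auC",
    # index 4 has no controller key: ckeys[4] is empty in the log above.
}


def registration_cmds(rpc="scripts/rpc.py"):
    """Build the rpc.py invocations the auth.sh loop performs, in order."""
    cmds = []
    for i, path in sorted(keys.items()):
        cmds.append([rpc, "keyring_file_add_key", f"key{i}", path])
        if i in ckeys:  # auth.sh guards this with [[ -n ${ckeys[i]} ]]
            cmds.append([rpc, "keyring_file_add_key", f"ckey{i}", ckeys[i]])
    return cmds


for cmd in registration_cmds():
    print(" ".join(cmd))
```

Each command corresponds to one `rpc_cmd keyring_file_add_key` line in the trace; the subsequent `[[ 0 == 0 ]]` checks are the RPC exit-status verification.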
00:27:20.711 13:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.84c ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.84c 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xJE 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qdc ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qdc 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.4BE 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.auC ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.auC 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qdk 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.711 13:51:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:20.711 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:20.972 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:20.972 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.274 Waiting for block devices as requested 00:27:24.274 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.274 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.274 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.274 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.535 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:24.535 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:24.535 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.796 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.796 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:25.056 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:25.056 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:25.056 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:25.056 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.318 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.318 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.318 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:25.578 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:26.520 No valid GPT data, bailing 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:26.520 00:27:26.520 Discovery Log Number of Records 2, Generation counter 2 00:27:26.520 =====Discovery Log Entry 0====== 00:27:26.520 trtype: tcp 00:27:26.520 adrfam: ipv4 00:27:26.520 subtype: current discovery subsystem 00:27:26.520 treq: not specified, sq flow control disable supported 00:27:26.520 portid: 1 00:27:26.520 trsvcid: 4420 00:27:26.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:26.520 traddr: 10.0.0.1 00:27:26.520 eflags: none 00:27:26.520 sectype: none 00:27:26.520 =====Discovery Log Entry 1====== 00:27:26.520 trtype: tcp 00:27:26.520 adrfam: ipv4 00:27:26.520 subtype: nvme subsystem 00:27:26.520 treq: not specified, sq flow control disable supported 00:27:26.520 portid: 1 00:27:26.520 trsvcid: 4420 00:27:26.520 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:26.520 traddr: 10.0.0.1 00:27:26.520 eflags: none 00:27:26.520 sectype: none 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
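For reference, the configfs sequence that `configure_kernel_target` runs in the log above (mkdir the subsystem, namespace and port nodes, echo the attributes, link the subsystem into the port, then whitelist the host under `allowed_hosts`) can be sketched as a standalone script. The attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are assumptions based on the standard kernel nvmet configfs layout, not taken verbatim from the log; `NVMET` defaults to a scratch directory so the sketch runs unprivileged, whereas on the test host it is `/sys/kernel/config/nvmet`.

```shell
# Sketch of the kernel NVMe-oF target setup driven by configure_kernel_target
# and nvmet_auth_init above. NVMET points at a scratch dir so this runs
# without root or the nvmet module; attribute names are assumed from the
# usual nvmet configfs layout.
set -e
NVMET="${NVMET:-$(mktemp -d)}"
subsys="$NVMET/subsystems/nqn.2024-02.io.spdk:cnode0"
ns="$subsys/namespaces/1"
port="$NVMET/ports/1"
host="$NVMET/hosts/nqn.2024-02.io.spdk:host0"

mkdir -p "$ns" "$port/subsystems" "$subsys/allowed_hosts" "$host"

# subsystem identity and namespace backing device (the echoes in the log)
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo /dev/nvme0n1 > "$ns/device_path"     # block device found by the nvme* scan
echo 1 > "$ns/enable"

# TCP listener matching the log's discover/connect address
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# expose the subsystem on the port (the first ln -s in the log)
ln -sfn "$subsys" "$port/subsystems/nqn.2024-02.io.spdk:cnode0"

# auth.sh then disables allow_any_host and whitelists host0 explicitly
echo 0 > "$subsys/attr_allow_any_host"
ln -sfn "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
```

On a live system each `echo` lands in a real configfs attribute and the symlinks activate the port/host mappings; here they only exercise the same sequence of operations.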
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.520 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.521 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.521 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.521 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.781 nvme0n1 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:26.781 13:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.781 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
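The `DHHC-1:xx:...:` strings echoed into the keyring and configfs above follow the NVMe in-band authentication secret representation: the `DHHC-1` prefix, a two-digit hash hint (`00` = unspecified, `01`/`02`/`03` = SHA-256/384/512), then base64 of the secret with a 4-byte CRC32 appended, terminated by a colon. A small checker for that outer shape can be sketched as below; this validates framing and decoded length only, not the CRC itself, and the layout described is an assumption from the spec rather than something the log states.

```shell
# Sketch: sanity-check the shape of a DH-HMAC-CHAP secret such as the
# DHHC-1:00:...: values fed to keyring_file_add_key above. Assumed layout:
# "DHHC-1:<hh>:<base64(secret || 4-byte CRC32)>:" with 32/48/64-byte secrets.
check_dhchap_key() {
    local key=$1
    case $key in
        DHHC-1:[0-9][0-9]:*:) ;;        # prefix, hash hint, payload, trailing colon
        *) echo "bad frame"; return 1 ;;
    esac
    local b64=${key#DHHC-1:??:}         # strip prefix and hash hint
    b64=${b64%:}                        # strip trailing colon
    local nbytes
    nbytes=$(printf '%s' "$b64" | base64 -d 2>/dev/null | wc -c)
    case $((nbytes - 4)) in             # drop the 4-byte CRC from the count
        32|48|64) echo "ok ($((nbytes - 4))-byte secret)" ;;
        *) echo "unexpected length"; return 1 ;;
    esac
}
```

This is why the test's companion `ckey`/`key` files differ in length: different hash hints pair with different secret sizes, while the outer frame stays identical.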
00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.782 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.043 nvme0n1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.043 13:51:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.043 
13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.043 nvme0n1 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.043 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:27.304 nvme0n1 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.304 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.565 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.566 nvme0n1 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.566 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.827 13:51:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.827 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.828 13:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.828 nvme0n1 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.828 
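The trace above is one pass of auth.sh's nested loops (host/auth.sh@101-104): for each dhgroup and keyid, the target-side key is installed with nvmet_auth_set_key, bdev_nvme_set_options restricts the host to a single digest/dhgroup pair, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key only when a controller key exists, which is why the keyid-4 attach has no ckey argument), verified via bdev_nvme_get_controllers, and detached. A runnable sketch of that loop structure, with the RPC helpers stubbed out to echo (the dhgroup subset is the one visible in this log; a real run needs a live SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the auth.sh test matrix seen in the trace above.
# rpc_cmd and nvmet_auth_set_key are stubbed with echo; a real run
# would talk to a live SPDK target over its RPC socket.
rpc_cmd() { echo "rpc_cmd $*"; }
nvmet_auth_set_key() { echo "nvmet_auth_set_key $*"; }

digest=sha256
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # subset exercised in this log
keys=(key0 key1 key2 key3 key4)
ckeys=(ckey0 ckey1 ckey2 ckey3 "")         # keyid 4 has no controller key

matrix_out=$(
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # install the key on the target side for this digest/dhgroup/keyid
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # restrict the host to a single digest/dhgroup combination
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
      # attach, adding --dhchap-ctrlr-key only when a ckey exists
      ckey=${ckeys[keyid]:+--dhchap-ctrlr-key ckey${keyid}}
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" $ckey
      # confirm the controller came up, then tear it down
      rpc_cmd bdev_nvme_get_controllers
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
)
echo "$matrix_out"
```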
13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:27.828 
13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.828 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.828 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.089 nvme0n1 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.089 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.089 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.089 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.350 nvme0n1 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.350 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.350 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.612 nvme0n1 00:27:28.612 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.612 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:28.873 13:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.873 13:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.873 nvme0n1 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
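The get_main_ns_ip fragment that repeats before every attach (nvmf/common.sh@769-783) picks the IP variable name for the transport in use from an associative array, dereferences it, and echoes the result. A minimal reconstruction under stated assumptions: the trace only shows the values "tcp" and "10.0.0.1", so the transport variable name (TEST_TRANSPORT here) and the exact guard ordering are guesses:

```shell
#!/usr/bin/env bash
# Reconstruction of get_main_ns_ip as inferred from the trace
# (nvmf/common.sh@769-783). TEST_TRANSPORT is an assumed variable
# name; the trace only shows its value, "tcp".
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    if [[ -z ${TEST_TRANSPORT:-} ]]; then return 1; fi
    local varname=${ip_candidates[$TEST_TRANSPORT]}
    if [[ -z $varname ]]; then return 1; fi
    ip=${!varname}   # indirect expansion: NVMF_INITIATOR_IP -> its value
    if [[ -z $ip ]]; then return 1; fi
    echo "$ip"
}

# Values taken from the log.
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp
main_ip=$(get_main_ns_ip)
echo "$main_ip"
```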
00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.873 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.134 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.135 13:51:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.135 nvme0n1 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.135 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.396 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.656 nvme0n1 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.656 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.657 
13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.657 13:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.917 nvme0n1 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.917 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.917 13:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.179 nvme0n1 00:27:30.179 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.179 13:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.179 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.179 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.179 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.179 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:30.440 
13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.440 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.441 13:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 nvme0n1 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.702 13:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.702 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.703 
13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.703 13:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 nvme0n1 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.964 13:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.964 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.965 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.536 nvme0n1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.536 13:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.108 nvme0n1
00:27:32.108 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.108 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.108 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.109 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.681 nvme0n1
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:32.681 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]]
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.682 13:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.253 nvme0n1
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.253 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.826 nvme0n1
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.826 13:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq:
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=:
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq:
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=:
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.826 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.768 nvme0n1
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:34.768 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]]
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.769 13:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.340 nvme0n1
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:35.340 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]]
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.602 13:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.174 nvme0n1
00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.174 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.434 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.005 nvme0n1 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:37.005 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.006 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:37.006 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.006 
13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.006 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.006 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.267 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.838 nvme0n1 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.838 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:37.839 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.100 nvme0n1 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.100 
13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.100 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.101 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.362 nvme0n1 
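The records around this point show the trace moving from the `sha256`/`ffdhe8192` combination to `sha384`/`ffdhe2048`: `host/auth.sh@100-103` iterates every digest, every DH group, and every key slot, installing the target-side key and then running a connect/detach cycle. A minimal sketch of that iteration order follows; the array values here are illustrative placeholders (the real script derives them from the test configuration), and the inner commands are stubbed with `echo` rather than the script's actual `nvmet_auth_set_key`/`connect_authenticate` helpers.

```shell
#!/usr/bin/env bash
# Sketch of the auth.sh test matrix seen in the trace: for each digest,
# each DH group, and each key slot, set the target key and run one
# connect/detach cycle. Values below are placeholders, not the script's.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe8192)
keys=(key0 key1 key2 key3 key4)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Real script: nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # then connect_authenticate "$digest" "$dhgroup" "$keyid"
            echo "$digest $dhgroup keyid=$keyid"
        done
    done
done
```

With 3 digests, 2 DH groups, and 5 key slots this yields 30 connect/detach cycles, which is why the same `bdev_nvme_set_options`/`bdev_nvme_attach_controller`/`bdev_nvme_detach_controller` sequence repeats throughout the log with only the digest, group, and key id changing.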
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:38.362 13:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.362 
13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.362 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.624 nvme0n1
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:38.624 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]]
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.625 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.885 nvme0n1
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:38.885 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.886 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.146 nvme0n1
00:27:39.146 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.146 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.146 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.146 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq:
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=:
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq:
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=:
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.147 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.407 nvme0n1
00:27:39.407 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.407 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==:
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==:
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.408 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.669 nvme0n1
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO:
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6:
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.669 13:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.931 nvme0n1
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==:
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp:
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.931 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.192 nvme0n1
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=:
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:40.192 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.193 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.453 nvme0n1
00:27:40.453 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.453 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.454 13:52:03
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.454 13:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.454 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.454 13:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 nvme0n1 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.715 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.976 
13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.976 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.237 nvme0n1 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.237 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.237 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.238 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.238 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.499 nvme0n1 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.499 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.500 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.500 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.760 nvme0n1 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.760 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.021 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.021 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.021 
13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.021 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.282 nvme0n1 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.282 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.282 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.852 nvme0n1 
00:27:42.852 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.852 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.852 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.852 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.852 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:42.852 13:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.852 
13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.852 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.853 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.424 nvme0n1 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.424 13:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.424 13:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.424 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.997 nvme0n1 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:43.997 13:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.997 13:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.997 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.258 nvme0n1 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.258 13:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:44.258 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.259 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.259 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:44.519 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 nvme0n1 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.047 13:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.047 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.048 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.689 nvme0n1 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.689 13:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.689 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.718 nvme0n1 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 
00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.718 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.719 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.291 nvme0n1 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.291 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:47.552 13:52:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.552 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.156 nvme0n1 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 4 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.156 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.157 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.157 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:27:48.157 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.157 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.417 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.417 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.417 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.417 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.417 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.418 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.989 nvme0n1 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.989 13:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:48.989 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.989 13:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.249 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 nvme0n1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:49.250 13:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.250 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 nvme0n1 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 
13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.510 13:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.510 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.511 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.511 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.511 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.511 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.772 nvme0n1 00:27:49.772 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.772 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.772 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.772 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.772 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.772 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.772 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.772 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.032 nvme0n1 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:50.032 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.032 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.033 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.293 nvme0n1 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.293 
13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.293 
13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.293 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.554 nvme0n1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.554 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.554 
13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.554 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.815 nvme0n1 00:27:50.815 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.815 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.815 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 
00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.815 13:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.815 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.075 nvme0n1 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.075 13:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.075 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.336 nvme0n1 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.336 13:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.336 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.337 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.597 nvme0n1 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.597 
13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.597 13:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.597 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.859 nvme0n1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.859 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:51.859 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.859 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.859 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.120 nvme0n1 00:27:52.120 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.120 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.120 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.120 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.120 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.380 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:52.380 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.380 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 nvme0n1 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.641 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.901 nvme0n1 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.901 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.902 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.162 
13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.162 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 nvme0n1 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.422 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.423 13:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.423 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 nvme0n1 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:53.993 13:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.993 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.994 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.564 nvme0n1 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.564 
13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.564 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.134 nvme0n1 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.134 13:52:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.134 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.135 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:55.395 nvme0n1 00:27:55.395 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.655 
13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.655 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.228 nvme0n1 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTBhMDZkYzM3YmVmODg0NzRhNTU2ODhhNjhlMGM1ODjTOOSq: 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWEyYmRiNGMwMmUxZWJmNDhmYzQwNDVkMWY3ZGI2ZjczODQ5NzQwYWNlYjU5Nzk0NmFhMTg1NjAyOTFkNTc0OY/Hv54=: 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.228 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.229 13:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.229 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.798 nvme0n1 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.798 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.058 13:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.058 13:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.058 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.629 nvme0n1 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.629 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.890 13:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.890 13:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.890 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.891 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.461 nvme0n1 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.462 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.722 13:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjkxZjZhZDAzOGJiMTU5ZjQ4ZmUxOGZiYmVkZTAwMDJhZDhkMTkxNzhiMTMyMmNlT/n1ww==: 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: ]] 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDRkYTNiZTg2NzJjOGI4MzRiODM0NTdjOTgyNGE3YzJfXGyp: 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.722 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.723 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:59.293 nvme0n1 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwM2NjMTI0MTVjZjlmOGE4ZjZhYTQzNjdmOTRiMmZhMTQzYWZmZTQ1YTQ2MWUzZjE5OTRiNDBhMjhjNDZjM9t80EA=: 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.293 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.553 
13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.553 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.124 nvme0n1 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:28:00.124 
13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.124 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.386 request: 00:28:00.386 { 00:28:00.386 "name": "nvme0", 00:28:00.386 "trtype": "tcp", 00:28:00.386 "traddr": "10.0.0.1", 00:28:00.386 "adrfam": "ipv4", 00:28:00.386 "trsvcid": "4420", 00:28:00.386 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:00.386 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:00.386 "prchk_reftag": false, 00:28:00.386 "prchk_guard": false, 00:28:00.386 "hdgst": false, 00:28:00.386 "ddgst": false, 00:28:00.386 "allow_unrecognized_csi": false, 00:28:00.386 "method": "bdev_nvme_attach_controller", 00:28:00.386 "req_id": 1 00:28:00.386 } 00:28:00.386 Got JSON-RPC error response 00:28:00.386 response: 00:28:00.386 { 00:28:00.386 "code": -5, 00:28:00.386 "message": "Input/output 
error" 00:28:00.386 } 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.386 request: 00:28:00.386 { 00:28:00.386 "name": "nvme0", 00:28:00.386 "trtype": "tcp", 00:28:00.386 "traddr": "10.0.0.1", 
00:28:00.386 "adrfam": "ipv4", 00:28:00.386 "trsvcid": "4420", 00:28:00.386 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:00.386 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:00.386 "prchk_reftag": false, 00:28:00.386 "prchk_guard": false, 00:28:00.386 "hdgst": false, 00:28:00.386 "ddgst": false, 00:28:00.386 "dhchap_key": "key2", 00:28:00.386 "allow_unrecognized_csi": false, 00:28:00.386 "method": "bdev_nvme_attach_controller", 00:28:00.386 "req_id": 1 00:28:00.386 } 00:28:00.386 Got JSON-RPC error response 00:28:00.386 response: 00:28:00.386 { 00:28:00.386 "code": -5, 00:28:00.386 "message": "Input/output error" 00:28:00.386 } 00:28:00.386 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.387 13:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.387 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:00.647 13:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.647 request: 00:28:00.647 { 00:28:00.647 "name": "nvme0", 00:28:00.647 "trtype": "tcp", 00:28:00.647 "traddr": "10.0.0.1", 00:28:00.647 "adrfam": "ipv4", 00:28:00.647 "trsvcid": "4420", 00:28:00.647 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:00.647 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:00.647 "prchk_reftag": false, 00:28:00.647 "prchk_guard": false, 00:28:00.647 "hdgst": false, 00:28:00.647 "ddgst": false, 00:28:00.647 "dhchap_key": "key1", 00:28:00.647 "dhchap_ctrlr_key": "ckey2", 00:28:00.647 "allow_unrecognized_csi": false, 00:28:00.647 "method": "bdev_nvme_attach_controller", 00:28:00.647 "req_id": 1 00:28:00.647 } 00:28:00.647 Got JSON-RPC error response 00:28:00.647 response: 00:28:00.647 { 00:28:00.647 "code": -5, 00:28:00.647 "message": "Input/output error" 00:28:00.647 } 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.647 nvme0n1 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.647 13:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:28:00.647 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.648 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.908 13:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.908 request: 00:28:00.908 { 00:28:00.908 "name": "nvme0", 00:28:00.908 "dhchap_key": "key1", 00:28:00.908 "dhchap_ctrlr_key": "ckey2", 00:28:00.908 "method": "bdev_nvme_set_keys", 00:28:00.908 "req_id": 1 00:28:00.908 } 00:28:00.908 Got JSON-RPC error response 00:28:00.908 response: 00:28:00.908 { 00:28:00.908 "code": -13, 00:28:00.908 "message": "Permission denied" 00:28:00.908 } 00:28:00.908 
13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:00.908 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:01.848 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:02.109 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQwMDJhY2YwZmQ4MmZmZDJlNGVhMGQwOTFkYWQ1M2ViOWQ0MDUwMGExYjViZjc40DvDpQ==: 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: ]] 00:28:03.052 13:52:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUyZjM5ZTkxZWExNTA2ZmUxM2U1ZGRjM2M4Zjc5NzZjNTQ5MjM5NGZkZTY4ODVh9HD4UA==: 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.052 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.313 nvme0n1 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.313 13:52:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2NhNDZiNTc1YzkzYmE0NDZkOTJiYjRjYmZhMDkxYzHUwwRO: 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: ]] 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjQ1ZDEwZDIzYThmMTliZWQ5NTM1YjYyMDAyODEzYTacgjH6: 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:03.313 
13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.313 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.314 request: 00:28:03.314 { 00:28:03.314 "name": "nvme0", 00:28:03.314 "dhchap_key": "key2", 00:28:03.314 "dhchap_ctrlr_key": "ckey1", 00:28:03.314 "method": "bdev_nvme_set_keys", 00:28:03.314 "req_id": 1 00:28:03.314 } 00:28:03.314 Got JSON-RPC error response 00:28:03.314 response: 00:28:03.314 { 00:28:03.314 "code": -13, 00:28:03.314 "message": "Permission denied" 00:28:03.314 } 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.314 13:52:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:03.314 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.256 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.256 rmmod nvme_tcp 00:28:04.517 rmmod nvme_fabrics 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 795513 ']' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 795513 ']' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 795513' 00:28:04.517 killing process with pid 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 795513 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.517 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:07.060 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:07.060 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:10.380 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.380 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.380 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.380 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.380 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.380 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.381 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:10.951 13:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7gy /tmp/spdk.key-null.YLM /tmp/spdk.key-sha256.xJE /tmp/spdk.key-sha384.4BE /tmp/spdk.key-sha512.Qdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:10.951 13:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:14.252 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:14.252 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:14.252 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:14.512 00:28:14.512 real 1m3.031s 00:28:14.512 user 0m56.623s 00:28:14.512 sys 0m15.670s 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.512 ************************************ 00:28:14.512 END TEST nvmf_auth_host 00:28:14.512 ************************************ 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:14.512 13:52:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:14.512 13:52:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.512 ************************************ 00:28:14.512 START TEST nvmf_digest 00:28:14.512 ************************************ 00:28:14.513 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:14.513 * Looking for test storage... 00:28:14.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.513 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:14.513 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:14.513 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:14.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.774 --rc genhtml_branch_coverage=1 00:28:14.774 --rc genhtml_function_coverage=1 00:28:14.774 --rc genhtml_legend=1 00:28:14.774 --rc geninfo_all_blocks=1 00:28:14.774 --rc geninfo_unexecuted_blocks=1 00:28:14.774 00:28:14.774 ' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:14.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.774 --rc genhtml_branch_coverage=1 00:28:14.774 --rc genhtml_function_coverage=1 00:28:14.774 --rc genhtml_legend=1 00:28:14.774 --rc geninfo_all_blocks=1 00:28:14.774 --rc geninfo_unexecuted_blocks=1 00:28:14.774 00:28:14.774 ' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:14.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.774 --rc genhtml_branch_coverage=1 00:28:14.774 --rc genhtml_function_coverage=1 00:28:14.774 --rc genhtml_legend=1 00:28:14.774 --rc geninfo_all_blocks=1 00:28:14.774 --rc geninfo_unexecuted_blocks=1 00:28:14.774 00:28:14.774 ' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:14.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.774 --rc genhtml_branch_coverage=1 00:28:14.774 --rc genhtml_function_coverage=1 00:28:14.774 --rc genhtml_legend=1 00:28:14.774 --rc geninfo_all_blocks=1 00:28:14.774 --rc geninfo_unexecuted_blocks=1 00:28:14.774 00:28:14.774 ' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.774 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:14.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.775 13:52:37 
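The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'`: the `-eq` operator of `test(1)` requires integer operands, and the variable being tested expanded to an empty string. A minimal reproduction plus a defensive pattern (illustrative, not the fix applied in `nvmf/common.sh`):

```shell
# Reproduce: -eq with an empty operand makes test complain on stderr and
# fail with a status greater than 1 (bash uses 2 for expression errors).
var=""
[ "$var" -eq 1 ] 2>/dev/null
echo "exit status: $?"

# Defensive pattern: default the expansion to 0 before comparing.
if [ "${var:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi   # prints "disabled"
```

Because the script does not run under `set -e` for this check, the error is only a warning in the log and the test run continues.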
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.775 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.916 13:52:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:22.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:22.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.916 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:22.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:22.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
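The `gather_supported_nvmf_pci_devs` trace above resolves each matching PCI address (for example `0000:4b:00.0`) to its kernel net devices through sysfs. A hedged, read-only sketch of that lookup; device paths and interface names will differ per machine:

```shell
# For every PCI function that has network interfaces registered, print the
# PCI address and the interface names found under its sysfs "net" directory.
for pci in /sys/bus/pci/devices/*; do
    [ -d "$pci/net" ] || continue
    printf '%s: %s\n' "$(basename "$pci")" "$(ls "$pci/net" | tr '\n' ' ')"
done
```

This is the same mechanism behind the `Found net devices under 0000:4b:00.0: cvl_0_0` lines in the log.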
00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:28:22.917 00:28:22.917 --- 10.0.0.2 ping statistics --- 00:28:22.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.917 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:28:22.917 00:28:22.917 --- 10.0.0.1 ping statistics --- 00:28:22.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.917 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.917 ************************************ 00:28:22.917 START TEST nvmf_digest_clean 00:28:22.917 ************************************ 00:28:22.917 
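Both ping runs above end with an `rtt min/avg/max/mdev` summary line; the average RTT can be extracted with awk (field positions assume the standard iputils output format shown in the log):

```shell
# Summary line copied from the second ping above.
summary="rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms"
# Split on both '/' and space; the average is then the 8th field.
avg=$(printf '%s\n' "$summary" | awk -F'[/ ]' '{print $8}')
echo "avg rtt: ${avg} ms"   # prints "avg rtt: 0.323 ms"
```

A sub-millisecond average in both directions confirms the namespace-to-root path over `cvl_0_0`/`cvl_0_1` is up before the digest tests start.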
13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=812835 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 812835 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 812835 ']' 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:22.917 13:52:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:22.917 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.917 [2024-11-06 13:52:45.561533] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:22.917 [2024-11-06 13:52:45.561598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.917 [2024-11-06 13:52:45.645236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.917 [2024-11-06 13:52:45.687157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.917 [2024-11-06 13:52:45.687197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.917 [2024-11-06 13:52:45.687207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.917 [2024-11-06 13:52:45.687215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.917 [2024-11-06 13:52:45.687223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:22.917 [2024-11-06 13:52:45.687885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 null0 00:28:23.178 [2024-11-06 13:52:46.472301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.178 [2024-11-06 13:52:46.496503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:23.178 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=813177 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 813177 /var/tmp/bperf.sock 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 813177 ']' 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
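The `Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...` message above comes from a `waitforlisten`-style helper. A minimal sketch of the polling idea, under the assumption that checking for the socket file is enough (the real helper also tracks the pid and retries RPC calls; `waitforsock` is an illustrative name):

```shell
# Poll until a UNIX-domain socket appears at the given path, or give up
# after a bounded number of 0.1 s retries.
waitforsock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket node exists; assume listener is up
        sleep 0.1
    done
    return 1
}
```

Usage would look like `waitforsock /var/tmp/bperf.sock 100 || exit 1` before issuing the first `rpc.py -s /var/tmp/bperf.sock` call.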
00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:23.179 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.179 [2024-11-06 13:52:46.546402] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:23.179 [2024-11-06 13:52:46.546449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813177 ] 00:28:23.439 [2024-11-06 13:52:46.633304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.440 [2024-11-06 13:52:46.669267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.011 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:24.011 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:24.011 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:24.011 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.011 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:24.271 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.271 13:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.842 nvme0n1 00:28:24.842 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.842 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.842 Running I/O for 2 seconds... 00:28:26.723 19740.00 IOPS, 77.11 MiB/s [2024-11-06T12:52:50.099Z] 20037.00 IOPS, 78.27 MiB/s 00:28:26.723 Latency(us) 00:28:26.723 [2024-11-06T12:52:50.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.723 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:26.723 nvme0n1 : 2.00 20071.62 78.40 0.00 0.00 6371.58 2498.56 13926.40 00:28:26.723 [2024-11-06T12:52:50.099Z] =================================================================================================================== 00:28:26.723 [2024-11-06T12:52:50.099Z] Total : 20071.62 78.40 0.00 0.00 6371.58 2498.56 13926.40 00:28:26.723 { 00:28:26.723 "results": [ 00:28:26.723 { 00:28:26.723 "job": "nvme0n1", 00:28:26.723 "core_mask": "0x2", 00:28:26.723 "workload": "randread", 00:28:26.723 "status": "finished", 00:28:26.723 "queue_depth": 128, 00:28:26.723 "io_size": 4096, 00:28:26.723 "runtime": 2.002928, 00:28:26.723 "iops": 20071.615155412477, 00:28:26.723 "mibps": 78.40474670082999, 00:28:26.723 "io_failed": 0, 00:28:26.723 "io_timeout": 0, 00:28:26.723 "avg_latency_us": 6371.5754798268745, 00:28:26.723 "min_latency_us": 2498.56, 00:28:26.723 "max_latency_us": 13926.4 00:28:26.723 } 00:28:26.723 ], 00:28:26.723 "core_count": 1 00:28:26.723 } 00:28:26.982 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.982 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.982 
13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.983 | select(.opcode=="crc32c") 00:28:26.983 | "\(.module_name) \(.executed)"' 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 813177 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 813177 ']' 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 813177 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:26.983 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 813177 00:28:27.242 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:27.242 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:28:27.242 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 813177' 00:28:27.242 killing process with pid 813177 00:28:27.242 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 813177 00:28:27.242 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.242 00:28:27.242 Latency(us) 00:28:27.242 [2024-11-06T12:52:50.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.242 [2024-11-06T12:52:50.618Z] =================================================================================================================== 00:28:27.242 [2024-11-06T12:52:50.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 813177 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=813868 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 813868 /var/tmp/bperf.sock 00:28:27.243 
13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 813868 ']' 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:27.243 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.243 [2024-11-06 13:52:50.511112] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:27.243 [2024-11-06 13:52:50.511168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813868 ] 00:28:27.243 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.243 Zero copy mechanism will not be used. 
00:28:27.243 [2024-11-06 13:52:50.594632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.502 [2024-11-06 13:52:50.624057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.072 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.072 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:28.072 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:28.072 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:28.072 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.331 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.331 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.590 nvme0n1 00:28:28.590 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.590 13:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.849 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.849 Zero copy mechanism will not be used. 00:28:28.849 Running I/O for 2 seconds... 
00:28:30.729 3747.00 IOPS, 468.38 MiB/s [2024-11-06T12:52:54.105Z] 3641.50 IOPS, 455.19 MiB/s 00:28:30.729 Latency(us) 00:28:30.729 [2024-11-06T12:52:54.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.729 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:30.729 nvme0n1 : 2.00 3643.28 455.41 0.00 0.00 4388.23 805.55 15291.73 00:28:30.729 [2024-11-06T12:52:54.105Z] =================================================================================================================== 00:28:30.729 [2024-11-06T12:52:54.105Z] Total : 3643.28 455.41 0.00 0.00 4388.23 805.55 15291.73 00:28:30.729 { 00:28:30.729 "results": [ 00:28:30.729 { 00:28:30.729 "job": "nvme0n1", 00:28:30.729 "core_mask": "0x2", 00:28:30.729 "workload": "randread", 00:28:30.729 "status": "finished", 00:28:30.729 "queue_depth": 16, 00:28:30.729 "io_size": 131072, 00:28:30.729 "runtime": 2.003414, 00:28:30.729 "iops": 3643.280919470464, 00:28:30.729 "mibps": 455.410114933808, 00:28:30.729 "io_failed": 0, 00:28:30.729 "io_timeout": 0, 00:28:30.729 "avg_latency_us": 4388.23258711239, 00:28:30.729 "min_latency_us": 805.5466666666666, 00:28:30.729 "max_latency_us": 15291.733333333334 00:28:30.729 } 00:28:30.729 ], 00:28:30.729 "core_count": 1 00:28:30.729 } 00:28:30.729 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.729 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.729 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.729 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.729 | select(.opcode=="crc32c") 00:28:30.729 | "\(.module_name) \(.executed)"' 00:28:30.729 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 813868 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 813868 ']' 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 813868 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 813868 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 813868' 00:28:30.988 killing process with pid 813868 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 813868 00:28:30.988 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.988 
00:28:30.988 Latency(us) 00:28:30.988 [2024-11-06T12:52:54.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.988 [2024-11-06T12:52:54.364Z] =================================================================================================================== 00:28:30.988 [2024-11-06T12:52:54.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.988 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 813868 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=814560 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 814560 /var/tmp/bperf.sock 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 814560 ']' 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:31.248 13:52:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:31.248 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.248 [2024-11-06 13:52:54.453363] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:31.248 [2024-11-06 13:52:54.453421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814560 ] 00:28:31.248 [2024-11-06 13:52:54.536163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.248 [2024-11-06 13:52:54.565721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.188 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.448 nvme0n1 00:28:32.448 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.448 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.707 Running I/O for 2 seconds... 
00:28:34.589 21584.00 IOPS, 84.31 MiB/s [2024-11-06T12:52:57.965Z] 21681.50 IOPS, 84.69 MiB/s 00:28:34.589 Latency(us) 00:28:34.589 [2024-11-06T12:52:57.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.589 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:34.589 nvme0n1 : 2.01 21704.61 84.78 0.00 0.00 5890.12 2266.45 13489.49 00:28:34.589 [2024-11-06T12:52:57.965Z] =================================================================================================================== 00:28:34.589 [2024-11-06T12:52:57.965Z] Total : 21704.61 84.78 0.00 0.00 5890.12 2266.45 13489.49 00:28:34.589 { 00:28:34.589 "results": [ 00:28:34.589 { 00:28:34.589 "job": "nvme0n1", 00:28:34.589 "core_mask": "0x2", 00:28:34.589 "workload": "randwrite", 00:28:34.589 "status": "finished", 00:28:34.589 "queue_depth": 128, 00:28:34.589 "io_size": 4096, 00:28:34.589 "runtime": 2.005979, 00:28:34.589 "iops": 21704.614056278755, 00:28:34.589 "mibps": 84.78364865733889, 00:28:34.589 "io_failed": 0, 00:28:34.589 "io_timeout": 0, 00:28:34.589 "avg_latency_us": 5890.124327461203, 00:28:34.589 "min_latency_us": 2266.4533333333334, 00:28:34.589 "max_latency_us": 13489.493333333334 00:28:34.589 } 00:28:34.589 ], 00:28:34.589 "core_count": 1 00:28:34.589 } 00:28:34.589 13:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:34.589 13:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:34.589 13:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.589 13:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.589 | select(.opcode=="crc32c") 00:28:34.589 | "\(.module_name) \(.executed)"' 00:28:34.589 13:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 814560 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 814560 ']' 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 814560 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 814560 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 814560' 00:28:34.851 killing process with pid 814560 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 814560 00:28:34.851 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.851 
00:28:34.851 Latency(us) 00:28:34.851 [2024-11-06T12:52:58.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.851 [2024-11-06T12:52:58.227Z] =================================================================================================================== 00:28:34.851 [2024-11-06T12:52:58.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 814560 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=815351 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 815351 /var/tmp/bperf.sock 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 815351 ']' 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:34.851 13:52:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:34.851 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.112 [2024-11-06 13:52:58.266139] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:35.112 [2024-11-06 13:52:58.266197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815351 ] 00:28:35.112 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.112 Zero copy mechanism will not be used. 
00:28:35.112 [2024-11-06 13:52:58.348006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.112 [2024-11-06 13:52:58.377510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.683 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:35.683 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:35.683 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.683 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.683 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:35.943 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.943 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.204 nvme0n1 00:28:36.204 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:36.204 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:36.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.467 Zero copy mechanism will not be used. 00:28:36.467 Running I/O for 2 seconds... 
00:28:38.348 4714.00 IOPS, 589.25 MiB/s [2024-11-06T12:53:01.724Z] 4407.00 IOPS, 550.88 MiB/s 00:28:38.348 Latency(us) 00:28:38.348 [2024-11-06T12:53:01.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:38.348 nvme0n1 : 2.01 4404.27 550.53 0.00 0.00 3627.04 1576.96 13434.88 00:28:38.348 [2024-11-06T12:53:01.724Z] =================================================================================================================== 00:28:38.348 [2024-11-06T12:53:01.724Z] Total : 4404.27 550.53 0.00 0.00 3627.04 1576.96 13434.88 00:28:38.348 { 00:28:38.348 "results": [ 00:28:38.348 { 00:28:38.348 "job": "nvme0n1", 00:28:38.348 "core_mask": "0x2", 00:28:38.348 "workload": "randwrite", 00:28:38.348 "status": "finished", 00:28:38.348 "queue_depth": 16, 00:28:38.348 "io_size": 131072, 00:28:38.348 "runtime": 2.005555, 00:28:38.348 "iops": 4404.26714799644, 00:28:38.348 "mibps": 550.533393499555, 00:28:38.348 "io_failed": 0, 00:28:38.348 "io_timeout": 0, 00:28:38.348 "avg_latency_us": 3627.038605230386, 00:28:38.348 "min_latency_us": 1576.96, 00:28:38.348 "max_latency_us": 13434.88 00:28:38.348 } 00:28:38.348 ], 00:28:38.348 "core_count": 1 00:28:38.348 } 00:28:38.348 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:38.348 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:38.348 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:38.348 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:38.348 | select(.opcode=="crc32c") 00:28:38.348 | "\(.module_name) \(.executed)"' 00:28:38.348 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 815351 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 815351 ']' 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 815351 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 815351 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 815351' 00:28:38.608 killing process with pid 815351 00:28:38.608 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 815351 00:28:38.608 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.608 
00:28:38.608 Latency(us) 00:28:38.608 [2024-11-06T12:53:01.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.608 [2024-11-06T12:53:01.984Z] =================================================================================================================== 00:28:38.608 [2024-11-06T12:53:01.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.609 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 815351 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 812835 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 812835 ']' 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 812835 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.869 13:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 812835 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 812835' 00:28:38.869 killing process with pid 812835 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 812835 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 812835 00:28:38.869 00:28:38.869 real 0m16.684s 
00:28:38.869 user 0m33.118s 00:28:38.869 sys 0m3.554s 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.869 ************************************ 00:28:38.869 END TEST nvmf_digest_clean 00:28:38.869 ************************************ 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:38.869 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.129 ************************************ 00:28:39.129 START TEST nvmf_digest_error 00:28:39.129 ************************************ 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=816266 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 816266 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 816266 ']' 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.129 13:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.129 [2024-11-06 13:53:02.321072] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:39.129 [2024-11-06 13:53:02.321158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.129 [2024-11-06 13:53:02.402505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.129 [2024-11-06 13:53:02.437846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.129 [2024-11-06 13:53:02.437877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:39.129 [2024-11-06 13:53:02.437884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.129 [2024-11-06 13:53:02.437891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.129 [2024-11-06 13:53:02.437897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.129 [2024-11-06 13:53:02.438433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.069 [2024-11-06 13:53:03.148457] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.069 13:53:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.069 null0 00:28:40.069 [2024-11-06 13:53:03.230923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.069 [2024-11-06 13:53:03.255135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=816315 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 816315 /var/tmp/bperf.sock 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 816315 ']' 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:40.069 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.069 [2024-11-06 13:53:03.310159] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:40.069 [2024-11-06 13:53:03.310207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816315 ] 00:28:40.069 [2024-11-06 13:53:03.392762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.069 [2024-11-06 13:53:03.422737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:41.009 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:41.269 nvme0n1 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:41.269 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.269 Running I/O for 2 seconds... 00:28:41.531 [2024-11-06 13:53:04.651879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.651909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.651919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.665900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.665919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.665926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.679830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.679847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.689686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.689703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8041 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.689710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.704414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.704432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.704438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.717896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.717913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.717920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.728376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.728393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.728399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.741116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.741134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.741140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.753044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.753061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.753069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.765399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.765416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.778649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.778666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.778673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.791751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 
00:28:41.531 [2024-11-06 13:53:04.791768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.531 [2024-11-06 13:53:04.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.531 [2024-11-06 13:53:04.804403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.531 [2024-11-06 13:53:04.804421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.804428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.817145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.817171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.827983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.828000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.828006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.841229] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.841246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.841253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.854288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.854305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.854311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.866564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.866580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.866587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.879617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.879635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.879643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.889995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.890011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.890017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.532 [2024-11-06 13:53:04.903568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.532 [2024-11-06 13:53:04.903585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.532 [2024-11-06 13:53:04.903592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.916523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.916540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.930105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.930125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.930131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.942157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.942174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.942181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.953410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.953427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.953433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.965851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.965868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.965874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.978109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.978126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.978133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:04.990533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:04.990550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:04.990556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.003891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.003909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.003915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.016782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.016799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.016806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.029656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.029673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11822 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.029679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.042147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.042164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.042171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.053925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.053942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.053949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.067477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.067494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.067501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.078779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.078796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:18547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.078802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.090606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.090624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.090630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.104188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.104206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.104212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.116917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.116934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.116941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.130084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.793 [2024-11-06 13:53:05.130102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.793 [2024-11-06 13:53:05.130108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.793 [2024-11-06 13:53:05.141998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.794 [2024-11-06 13:53:05.142015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.794 [2024-11-06 13:53:05.142026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.794 [2024-11-06 13:53:05.154853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.794 [2024-11-06 13:53:05.154869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.794 [2024-11-06 13:53:05.154876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.794 [2024-11-06 13:53:05.166354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:41.794 [2024-11-06 13:53:05.166371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.794 [2024-11-06 13:53:05.166378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.178999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.179024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.179030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.191965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.191982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.191989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.202486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.202503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.202510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.215615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.215632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.215638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.229738] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.229759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.229765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.243589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.243606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.243613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.256658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.256678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.256685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.268901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.268919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.268925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.279984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.280001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.280007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.293916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.293933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.293939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.306749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.306766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.306772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.317293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.317310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.317316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.330080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.330103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.343472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.343489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.343496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.355999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.356016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.356023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.368474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.368491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.368498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.382104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.382122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.382128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.394457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.394474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.394480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.404672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.404689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.055 [2024-11-06 13:53:05.404696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.055 [2024-11-06 13:53:05.419490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.055 [2024-11-06 13:53:05.419507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:42.055 [2024-11-06 13:53:05.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.430852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.430869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.430876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.442966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.442983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.442989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.455296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.455312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.455319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.469162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.469183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:13932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.469190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.482257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.482274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.482281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.495535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.495552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.495558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.505318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.505335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.505341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.518807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.316 [2024-11-06 13:53:05.518824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.316 [2024-11-06 13:53:05.518830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.316 [2024-11-06 13:53:05.532859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.532876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.532883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.545015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.545032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.545038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.557630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.557646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.557653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.569449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 
00:28:42.317 [2024-11-06 13:53:05.569466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.569473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.582066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.582083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.582089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.595709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.595726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.595732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.608497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.608514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.608520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.618214] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.618231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.618237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.632097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.632113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.632119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 20150.00 IOPS, 78.71 MiB/s [2024-11-06T12:53:05.693Z] [2024-11-06 13:53:05.645772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.645790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.645796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.657074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.657091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.657097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.670772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.670789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.670795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.317 [2024-11-06 13:53:05.683700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.317 [2024-11-06 13:53:05.683717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.317 [2024-11-06 13:53:05.683726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.696373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.696390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.696396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.706891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.706907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.706914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.721733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.721754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.721761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.732362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.732379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.732385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.745616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.745633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.745639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.757983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.757999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:42.578 [2024-11-06 13:53:05.758005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.770841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.770858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.770864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.782484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.782500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.782507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.795500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.795519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.795526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.806151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.806168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:3441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.806174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.819395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.819412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.819418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.832219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.832236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.832242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.845593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.845610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.845616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.858241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.858257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.858264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.578 [2024-11-06 13:53:05.870003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.578 [2024-11-06 13:53:05.870020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.578 [2024-11-06 13:53:05.870026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.882970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.579 [2024-11-06 13:53:05.882987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.882994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.895868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.579 [2024-11-06 13:53:05.895884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.895890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.908638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 
00:28:42.579 [2024-11-06 13:53:05.908655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.908662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.920791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.579 [2024-11-06 13:53:05.920808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.920814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.932562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.579 [2024-11-06 13:53:05.932578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.932584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.579 [2024-11-06 13:53:05.945799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.579 [2024-11-06 13:53:05.945815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.579 [2024-11-06 13:53:05.945822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:05.956687] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:05.956704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:05.956711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:05.970391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:05.970408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:05.970414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:05.983889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:05.983906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:05.983913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:05.996718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:05.996735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:05.996741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.006319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.006335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.006345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.020332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.020349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.033917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.033934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.033940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.047436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.047453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.047459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.056958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.056975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.056981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.070037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.070054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.070060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.083216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.083233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.083239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.097074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.097091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.840 [2024-11-06 13:53:06.097097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.840 [2024-11-06 13:53:06.110195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.840 [2024-11-06 13:53:06.110212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.110218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.122720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.122738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.122744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.135678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.135695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.135701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.147370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.147386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24778 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.147393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.158900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.158917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.158924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.171455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.171472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.171478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.184040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.184057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.184063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.198088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.198105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:8515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.198111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.841 [2024-11-06 13:53:06.211815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:42.841 [2024-11-06 13:53:06.211833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.841 [2024-11-06 13:53:06.211840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.223700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.223718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.223731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.236134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.236152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.236158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.248375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.248392] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.248398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.258988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.259005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.259011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.272373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.272390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.272396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.284608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.284625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.284632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.298061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.298078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.298085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.311500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.311517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.311523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.322896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.322913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.322919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.333664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.333684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.333690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.347344] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.347361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.347367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.361901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.361918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.361924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.374302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.374319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.374327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.386053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.386069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.386076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.397251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.397267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.397274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.409647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.409663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.409669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.423603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.423620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.423626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.436907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.436924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.436930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.450298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.450315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.450321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.102 [2024-11-06 13:53:06.461293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.102 [2024-11-06 13:53:06.461310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.102 [2024-11-06 13:53:06.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.103 [2024-11-06 13:53:06.476000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.103 [2024-11-06 13:53:06.476016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.103 [2024-11-06 13:53:06.476023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.488655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.488671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.488678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.501809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.501826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.501832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.512917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.512934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.512940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.524649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.524666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.524672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.538955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.538972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.538978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.551703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.551719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.551729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.562427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.562444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.363 [2024-11-06 13:53:06.562450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.363 [2024-11-06 13:53:06.574201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.363 [2024-11-06 13:53:06.574218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.574225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 13:53:06.588944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.364 [2024-11-06 13:53:06.588960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:2992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.588966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 13:53:06.601630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.364 [2024-11-06 13:53:06.601646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.601652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 13:53:06.613012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.364 [2024-11-06 13:53:06.613028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.613035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 13:53:06.625814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 00:28:43.364 [2024-11-06 13:53:06.625831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.625837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 20239.50 IOPS, 79.06 MiB/s [2024-11-06T12:53:06.740Z] [2024-11-06 13:53:06.637774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2d5c0) 
00:28:43.364 [2024-11-06 13:53:06.637790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 13:53:06.637796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 00:28:43.364 Latency(us) 00:28:43.364 [2024-11-06T12:53:06.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:43.364 nvme0n1 : 2.04 19867.96 77.61 0.00 0.00 6310.13 2129.92 48059.73 00:28:43.364 [2024-11-06T12:53:06.740Z] =================================================================================================================== 00:28:43.364 [2024-11-06T12:53:06.740Z] Total : 19867.96 77.61 0.00 0.00 6310.13 2129.92 48059.73 00:28:43.364 { 00:28:43.364 "results": [ 00:28:43.364 { 00:28:43.364 "job": "nvme0n1", 00:28:43.364 "core_mask": "0x2", 00:28:43.364 "workload": "randread", 00:28:43.364 "status": "finished", 00:28:43.364 "queue_depth": 128, 00:28:43.364 "io_size": 4096, 00:28:43.364 "runtime": 2.043843, 00:28:43.364 "iops": 19867.964418010582, 00:28:43.364 "mibps": 77.60923600785384, 00:28:43.364 "io_failed": 0, 00:28:43.364 "io_timeout": 0, 00:28:43.364 "avg_latency_us": 6310.134104300572, 00:28:43.364 "min_latency_us": 2129.92, 00:28:43.364 "max_latency_us": 48059.73333333333 00:28:43.364 } 00:28:43.364 ], 00:28:43.364 "core_count": 1 00:28:43.364 } 00:28:43.364 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:43.364 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:43.364 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:43.364 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:43.364 | .driver_specific 00:28:43.364 | .nvme_error 00:28:43.364 | .status_code 00:28:43.364 | .command_transient_transport_error' 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 816315 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 816315 ']' 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 816315 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 816315 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 816315' 00:28:43.624 killing process with pid 816315 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 816315 00:28:43.624 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.624 00:28:43.624 Latency(us) 00:28:43.624 [2024-11-06T12:53:07.000Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:28:43.624 [2024-11-06T12:53:07.000Z] =================================================================================================================== 00:28:43.624 [2024-11-06T12:53:07.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.624 13:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 816315 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=817102 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 817102 /var/tmp/bperf.sock 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 817102 ']' 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:43.913 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.913 [2024-11-06 13:53:07.107943] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:43.913 [2024-11-06 13:53:07.108001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817102 ] 00:28:43.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.913 Zero copy mechanism will not be used. 00:28:43.913 [2024-11-06 13:53:07.189744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.913 [2024-11-06 13:53:07.219489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.857 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:44.857 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:44.857 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.857 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.857 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.857 13:53:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.857 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.857 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.857 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.857 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.118 nvme0n1 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:45.118 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.118 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:45.118 Zero copy mechanism will not be used. 00:28:45.118 Running I/O for 2 seconds... 
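The long run of `data digest error` messages that follows is expected: the test arms `accel_error_inject_error -o crc32c -t corrupt` and attaches the controller with `--ddgst`, so the initiator's receive path recomputes the CRC-32C data digest (DDGST) of each NVMe/TCP data PDU and rejects it when the injected corruption makes the digests disagree. As a minimal illustration only (not SPDK's actual accel code), a bit-by-bit CRC-32C can be sketched like this; the payload bytes and the single-bit flip are made-up example values:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # If the low bit is set, shift and fold in the polynomial;
            # otherwise just shift. (poly * (crc & 1) selects between them.)
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

payload = b"123456789"            # standard CRC check-value input
digest = crc32c(payload)
print(hex(digest))                # 0xe3069283, the CRC-32C check value

# Flipping one payload bit changes the digest -- the receiver's recomputed
# CRC-32C then mismatches the DDGST carried in the PDU, which is exactly
# the condition the 'corrupt' injection provokes in the log below.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != digest
```

Because bdevperf retries are enabled (`--bdev-retry-count -1`), each mismatch surfaces as a `COMMAND TRANSIENT TRANSPORT ERROR` completion rather than a fatal failure, which is why the errors repeat for the whole 2-second run.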
00:28:45.379 [2024-11-06 13:53:08.493788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.493827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.493836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.504727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.504753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.504761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.514244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.514264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.514270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.525434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.525452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.535400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.535418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.535425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.546762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.546780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.546787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.558005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.558023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.558030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.568770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.568789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.568797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.581059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.581077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.590909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.590927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.590934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.603528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.603546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.603553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.616383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.616401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.616407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.629236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.629254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.629261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.641276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.641294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.641300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.653118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.653136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.653142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.661005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.661024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.661031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.669118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.669135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.669142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.680215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.680236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.680243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.691649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.691667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.691673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.701164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 
13:53:08.701182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.711150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.711168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.711175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.719509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.719527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.719534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.379 [2024-11-06 13:53:08.727973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.379 [2024-11-06 13:53:08.727991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.379 [2024-11-06 13:53:08.727997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.380 [2024-11-06 13:53:08.737338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2344a20) 00:28:45.380 [2024-11-06 13:53:08.737355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.380 [2024-11-06 13:53:08.737361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.380 [2024-11-06 13:53:08.744959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.380 [2024-11-06 13:53:08.744977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.380 [2024-11-06 13:53:08.744984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.755644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.640 [2024-11-06 13:53:08.755662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.640 [2024-11-06 13:53:08.755669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.766651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.640 [2024-11-06 13:53:08.766669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.640 [2024-11-06 13:53:08.766676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.776546] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.640 [2024-11-06 13:53:08.776563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.640 [2024-11-06 13:53:08.776570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.787758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.640 [2024-11-06 13:53:08.787775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.640 [2024-11-06 13:53:08.787782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.796488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.640 [2024-11-06 13:53:08.796506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.640 [2024-11-06 13:53:08.796512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.640 [2024-11-06 13:53:08.805147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.805165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.805171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.815156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.815175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.825103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.825121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.825127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.833579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.833598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.833604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.842199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.842218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.842227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.851315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.851333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.851339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.861842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.861860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.861867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.873255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.873274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.873280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.884056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.884075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 
13:53:08.884082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.894750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.894769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.894775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.905243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.905261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.905268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.916419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.916438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.916444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.928772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.928791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.928797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.940509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.940532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.940538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.951588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.951606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.951613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.961728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.961753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.961760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.969258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.969277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.969283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.977858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.977877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.977883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.986534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.986553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:08.995123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 13:53:08.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:08.995147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.641 [2024-11-06 13:53:09.005273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.641 [2024-11-06 
13:53:09.005292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.641 [2024-11-06 13:53:09.005299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.015635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.015663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.025872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.025891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.025897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.034631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.034650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.034656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.042728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.042752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.042758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.051873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.051892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.051898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.064038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.064057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.064064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.076219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.076238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.076244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.087959] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.087977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.087983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.099465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.099490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.902 [2024-11-06 13:53:09.112793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.902 [2024-11-06 13:53:09.112815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.902 [2024-11-06 13:53:09.112821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.124537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.124556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.124562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.134640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.134659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.134665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.147202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.147220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.147227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.160376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.160395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.172781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.172800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.172806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.184655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.184674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.184680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.193283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.193301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.193307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.205760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.205778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.205784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.218281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.218300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.218307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.229872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.229891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.229897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.237836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.237855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.237861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.247781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.247800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.247806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.256425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.256444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.256450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.265600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.265618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.265625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.903 [2024-11-06 13:53:09.274901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:45.903 [2024-11-06 13:53:09.274919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.903 [2024-11-06 13:53:09.274926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.285059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.285078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.285084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.295065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.295083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.295092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.303447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.303465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.303472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.315081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.315099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.315106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.323162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.323181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.323188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.333468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.333486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.333492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.343508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.343526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.343532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.354192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.354210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.354217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.364312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.364330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.364337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.373322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.373341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.373347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.383622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.383643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.383650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.391997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.392016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.392022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.404250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.404269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.404275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.413953] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.413971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.413978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.424473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.424491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.424497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.434882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.434900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.434906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.444759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.444777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.444784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.453537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.453555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.453562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.465374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.465392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.465398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.477570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.477588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.477595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.164 2985.00 IOPS, 373.12 MiB/s [2024-11-06T12:53:09.540Z] [2024-11-06 13:53:09.491352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.491371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.491378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.503404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.503423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.503430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.516572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.516591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.516597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.164 [2024-11-06 13:53:09.529509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.164 [2024-11-06 13:53:09.529528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.164 [2024-11-06 13:53:09.529534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.542075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.542094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.542101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.552760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.552779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.552786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.565973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.565991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.565998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.579166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.579184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.579194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.591266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.591284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.591290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.604362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.604380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.604386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.617483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.617501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.617508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.626419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.626438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.626444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.637812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.637830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.637837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.647380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.647398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.647404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.653716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.653734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.663399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.663418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.663424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.675003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.675022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.675028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.684240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.684258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.684265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.692939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.692957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.692964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.702742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.702765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.702771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.712587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.712605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.712612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.722359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.722376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.722383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.733715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.733734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.733740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.743238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.743256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.743262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.753301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.753320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.753329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.764867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.764886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.764892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.773541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.773560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.773566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.783218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.783236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.783243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.426 [2024-11-06 13:53:09.794061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.426 [2024-11-06 13:53:09.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.426 [2024-11-06 13:53:09.794086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.804014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.804032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.804039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.813925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.813943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.813950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.821001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.821019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.821026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.830561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.830579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.830585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.840274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.840299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.840305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.850621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.850639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.850646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.860593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.860611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:46.688 [2024-11-06 13:53:09.860618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.866480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.866498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.866504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.876282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.876300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.876306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.883629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.883647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.883653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.893811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.893830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.893836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.688 [2024-11-06 13:53:09.903787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.688 [2024-11-06 13:53:09.903805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.688 [2024-11-06 13:53:09.903812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.913162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.913180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.913187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.923913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.923932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.923938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.933427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.933446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.933452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.943422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.943440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.943446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.952814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.952832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.952838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.963417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.963435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.963442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.972536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 
00:28:46.689 [2024-11-06 13:53:09.972554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.972560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.978780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.978797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.978803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.985454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.985471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.985477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:09.994958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:09.994975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:09.994984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.005549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.005567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.005574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.011448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.011465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.020792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.020809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.030757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.030775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.030781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.041844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.041869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.050924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.050941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.050948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.689 [2024-11-06 13:53:10.059276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.689 [2024-11-06 13:53:10.059293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.689 [2024-11-06 13:53:10.059300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.067476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.067494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.067501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.076539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.076561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.076568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.084902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.084919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.084926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.093168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.093185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.093192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.104372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.104390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.104396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.116547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.116565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.116571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.130040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.130057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.130064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.141550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.141568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.141574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.151665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.151682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.950 [2024-11-06 13:53:10.151689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.163167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.163184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.163190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.173844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.173862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.173868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.186233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.186251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.186258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.198173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.198191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.198198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.210122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.210140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.210147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.218946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.218964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.218970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.229269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.229287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.229294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.239597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.239616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.239622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.250624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.250642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.250649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.258893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.258911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.268335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.268353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.268360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.279305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 
00:28:46.950 [2024-11-06 13:53:10.279324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.279330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.288884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.288903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.288909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.298699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.298717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.950 [2024-11-06 13:53:10.298723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.950 [2024-11-06 13:53:10.305576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.950 [2024-11-06 13:53:10.305596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.951 [2024-11-06 13:53:10.305604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.951 [2024-11-06 13:53:10.315807] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:46.951 [2024-11-06 13:53:10.315825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.951 [2024-11-06 13:53:10.315831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.326199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.326218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.326224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.336423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.336442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.336448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.343968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.343990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.343997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.352969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.352986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.352993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.362201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.362220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.362227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.373532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.373551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.373558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.384470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.384488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.384494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.395966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.395984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.395991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.406101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.406119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.406125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.413907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.413925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.211 [2024-11-06 13:53:10.413931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.211 [2024-11-06 13:53:10.423456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.211 [2024-11-06 13:53:10.423474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.423484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.432518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.432536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.432542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.442091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.442110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.442116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.451213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.451232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.451238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.460051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.460069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.460076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.469043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.469062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.469068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.212 [2024-11-06 13:53:10.480886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2344a20) 00:28:47.212 [2024-11-06 13:53:10.480904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.212 [2024-11-06 13:53:10.480910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.212 3041.50 IOPS, 380.19 MiB/s 00:28:47.212 Latency(us) 00:28:47.212 [2024-11-06T12:53:10.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.212 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:47.212 nvme0n1 : 2.00 3042.99 380.37 0.00 0.00 5255.34 1099.09 14090.24 00:28:47.212 [2024-11-06T12:53:10.588Z] =================================================================================================================== 00:28:47.212 [2024-11-06T12:53:10.588Z] Total : 3042.99 380.37 0.00 0.00 5255.34 1099.09 14090.24 00:28:47.212 { 00:28:47.212 "results": [ 00:28:47.212 { 00:28:47.212 "job": "nvme0n1", 00:28:47.212 "core_mask": "0x2", 00:28:47.212 "workload": "randread", 00:28:47.212 
"status": "finished", 00:28:47.212 "queue_depth": 16, 00:28:47.212 "io_size": 131072, 00:28:47.212 "runtime": 2.00428, 00:28:47.212 "iops": 3042.9880056678708, 00:28:47.212 "mibps": 380.37350070848385, 00:28:47.212 "io_failed": 0, 00:28:47.212 "io_timeout": 0, 00:28:47.212 "avg_latency_us": 5255.33567251462, 00:28:47.212 "min_latency_us": 1099.0933333333332, 00:28:47.212 "max_latency_us": 14090.24 00:28:47.212 } 00:28:47.212 ], 00:28:47.212 "core_count": 1 00:28:47.212 } 00:28:47.212 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:47.212 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:47.212 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:47.212 | .driver_specific 00:28:47.212 | .nvme_error 00:28:47.212 | .status_code 00:28:47.212 | .command_transient_transport_error' 00:28:47.212 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 817102 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 817102 ']' 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 817102 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
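The shell trace above pulls the transient-error count out of the `bdev_get_iostat` reply with a jq filter and then checks `(( 196 > 0 ))`. The same lookup, plus the IOPS-to-MiB/s conversion shown in the results block, can be sketched in Python. The JSON below is a hypothetical reply shaped after the fields visible in this log, not captured RPC output:

```python
import json

# Hypothetical bdev_get_iostat reply, shaped after the fields seen in this
# log (the real reply comes from rpc.py -s /var/tmp/bperf.sock
# bdev_get_iostat -b nvme0n1).
reply = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 196
          }
        }
      }
    }
  ]
}
""")

# Equivalent of the jq filter in the trace:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#            | .command_transient_transport_error
errcount = (reply["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(errcount)  # the test only requires this count to be > 0

# The MiB/s in the results block is IOPS * io_size / 2**20:
iops, io_size = 3042.9880056678708, 131072
mibps = iops * io_size / 2**20
print(round(mibps, 2))
```

With a 128 KiB IO size, MiB/s is simply IOPS divided by 8, which is why the reported 3042.99 IOPS corresponds to 380.37 MiB/s.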
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 817102 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 817102' 00:28:47.473 killing process with pid 817102 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 817102 00:28:47.473 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.473 00:28:47.473 Latency(us) 00:28:47.473 [2024-11-06T12:53:10.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.473 [2024-11-06T12:53:10.849Z] =================================================================================================================== 00:28:47.473 [2024-11-06T12:53:10.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.473 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 817102 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:47.734 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=817893 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 
-- # waitforlisten 817893 /var/tmp/bperf.sock 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 817893 ']' 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:47.735 13:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.735 [2024-11-06 13:53:10.905508] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:28:47.735 [2024-11-06 13:53:10.905567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817893 ] 00:28:47.735 [2024-11-06 13:53:10.989814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.735 [2024-11-06 13:53:11.018840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.676 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.936 nvme0n1 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:48.936 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.936 Running I/O for 2 seconds... 
00:28:48.936 [2024-11-06 13:53:12.211327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.211642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.211668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.224011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.224318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.224341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.236663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.236965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.236983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.249283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.249556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.249573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.261896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.262210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.274483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.274774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.274791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.287049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.287320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.287337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.936 [2024-11-06 13:53:12.299623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:48.936 [2024-11-06 13:53:12.299912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.936 [2024-11-06 13:53:12.299929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.312174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.324845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.325144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.325160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.337443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.337767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.349997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.350283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.350300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.362556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.362834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.362851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.375109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.375386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.387653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.387965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.400212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.400513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 
[2024-11-06 13:53:12.400529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.412793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.413088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.413105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.425369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.425633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.425649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.437926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.438208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.450483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.450757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25059 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.463055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.463327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.463343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.475625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.475914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.475931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.488192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.488457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.488474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.500770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.501079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.501095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.513338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.513604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.513621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.525911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.526174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.197 [2024-11-06 13:53:12.526191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.197 [2024-11-06 13:53:12.538484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.197 [2024-11-06 13:53:12.538650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.198 [2024-11-06 13:53:12.538666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.198 [2024-11-06 13:53:12.551019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.198 [2024-11-06 13:53:12.551309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.198 [2024-11-06 13:53:12.551326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.198 [2024-11-06 13:53:12.563611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.198 [2024-11-06 13:53:12.563939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.198 [2024-11-06 13:53:12.563955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.458 [2024-11-06 13:53:12.576126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.458 [2024-11-06 13:53:12.576436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.458 [2024-11-06 13:53:12.576452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.458 [2024-11-06 13:53:12.588711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.458 [2024-11-06 13:53:12.589000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.458 [2024-11-06 13:53:12.589015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.458 [2024-11-06 13:53:12.601275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.458 
[2024-11-06 13:53:12.601540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.458 [2024-11-06 13:53:12.601556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.458 [2024-11-06 13:53:12.613850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.458 [2024-11-06 13:53:12.614123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.458 [2024-11-06 13:53:12.614139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.458 [2024-11-06 13:53:12.626389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.458 [2024-11-06 13:53:12.626667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.626683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.638982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.639293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.639309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.651534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.651796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.651813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.664310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.664590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.664610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.676878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.677199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.677216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.689436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.689698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.689714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.702019] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.702334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.714557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.714812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.714828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.727077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.727390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.727406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.739695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.739981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:49.459 [2024-11-06 13:53:12.752285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.752556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.752571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.764897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.765069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.765085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.777466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.777736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.777756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.790029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.790349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.790366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.802581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.802846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.802862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.815146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.815316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.815332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.459 [2024-11-06 13:53:12.827706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.459 [2024-11-06 13:53:12.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.459 [2024-11-06 13:53:12.828022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.840280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.840592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.840608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.852835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.853102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.853118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.865372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.865668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.865683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.877949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.878330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.878346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.890497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.890816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.890832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.903059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.903345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.903361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.915588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.915903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.915919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.928126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.928390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.928406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.940656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.940965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 
[2024-11-06 13:53:12.940981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.953257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.953568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.953584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.965797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.966088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.978342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.978537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.978553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:12.990903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:12.991162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21110 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:12.991181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:13.003426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:13.003795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:13.003811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:13.016043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:13.016305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:13.016321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:13.028579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:13.028922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:13.028938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:13.041143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.720 [2024-11-06 13:53:13.041399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:6270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.720 [2024-11-06 13:53:13.041415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.720 [2024-11-06 13:53:13.053695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.721 [2024-11-06 13:53:13.053897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.721 [2024-11-06 13:53:13.053913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.721 [2024-11-06 13:53:13.066249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.721 [2024-11-06 13:53:13.066529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.721 [2024-11-06 13:53:13.066544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.721 [2024-11-06 13:53:13.078839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.721 [2024-11-06 13:53:13.079031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.721 [2024-11-06 13:53:13.079046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.721 [2024-11-06 13:53:13.091379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.721 [2024-11-06 13:53:13.091749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.721 [2024-11-06 13:53:13.091765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.103933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.104205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.104220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.116491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.116806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.116822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.129049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.129345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.129361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.141548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 
[2024-11-06 13:53:13.141753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.141769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.154130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.154431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.166678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.166887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.166903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.179257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.179547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.179562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.191841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.192114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.192129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 20300.00 IOPS, 79.30 MiB/s [2024-11-06T12:53:13.358Z] [2024-11-06 13:53:13.204404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.204778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.204794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.216980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.217177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.217192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.229518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.229773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.229789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:49.982 [2024-11-06 13:53:13.242084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.242280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.242296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.254630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.254819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.254835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.267183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.267446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.267462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.279744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.280037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.280052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.292330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.292596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.292611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.304887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.305180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.305196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.317419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.982 [2024-11-06 13:53:13.317613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.982 [2024-11-06 13:53:13.317632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.982 [2024-11-06 13:53:13.329966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.983 [2024-11-06 13:53:13.330168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.983 [2024-11-06 13:53:13.330183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.983 [2024-11-06 13:53:13.342592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.983 [2024-11-06 13:53:13.342883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.983 [2024-11-06 13:53:13.342899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.983 [2024-11-06 13:53:13.355138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:49.983 [2024-11-06 13:53:13.355521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.983 [2024-11-06 13:53:13.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.367699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.367995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.368010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.380270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.380464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.380479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.392821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.393015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.393031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.405376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.405754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.405770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.417901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.418097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.418112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.430445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.430738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:50.244 [2024-11-06 13:53:13.430758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.443019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.443214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.443229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.455571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.455766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.455783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.468117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.468311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.468327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.480662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.480968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24023 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.244 [2024-11-06 13:53:13.480984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.244 [2024-11-06 13:53:13.493216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.244 [2024-11-06 13:53:13.493485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.493501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.505785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.506081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.506097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.518330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.518705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.518721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.530880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.531091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.531107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.543422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.543700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.543715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.555992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.556255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.556272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.568537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.568708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.568724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.581096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.581452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.581467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.593649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.593846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.593862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.245 [2024-11-06 13:53:13.606208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.245 [2024-11-06 13:53:13.606486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.245 [2024-11-06 13:53:13.606501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.618774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.619073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.619090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.631341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with 
pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.631608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.631624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.643915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.644217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.644235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.656669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.656966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.656982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.669236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.669543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.669558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.681779] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.682032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.682048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.694330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.694704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.694720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.706882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.707203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.707219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.719423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.719704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.719720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.732006] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.732329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.744564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.744876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.744892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.757116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.757311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.757327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.769676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.770000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.770017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:50.506 [2024-11-06 13:53:13.782263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.782453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.782468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.794800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.795056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.795071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.807347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.807616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.819901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.820162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.820178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.832483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.832847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.832863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.845047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.845206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.845222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.857596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.857769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.857785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.506 [2024-11-06 13:53:13.870160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.506 [2024-11-06 13:53:13.870453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.506 [2024-11-06 13:53:13.870469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.882734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.883051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.883067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.895309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.895602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.895618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.907931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.908233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.908249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.920488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.920686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.920702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.933077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.933370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.933386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.945627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.945946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.945962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.958220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.970788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.971162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:50.768 [2024-11-06 13:53:13.971181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.983296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.983537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.983553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:13.995880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:13.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:13.996273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.008445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.008720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.008735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.020985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.021197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25456 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.021214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.033587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.033784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.033800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.046157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.046352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.046368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.058724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.058926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.058941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.071298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.071585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.071601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.083853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.084140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.084158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.096437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.096704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.096719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.109096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.109368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.109383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.121621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.121899] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.121915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:50.768 [2024-11-06 13:53:14.134213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:50.768 [2024-11-06 13:53:14.134487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.768 [2024-11-06 13:53:14.134503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 [2024-11-06 13:53:14.146788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:51.029 [2024-11-06 13:53:14.147061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.029 [2024-11-06 13:53:14.147077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 [2024-11-06 13:53:14.159357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:51.029 [2024-11-06 13:53:14.159619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.029 [2024-11-06 13:53:14.159634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 [2024-11-06 13:53:14.171987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 
00:28:51.029 [2024-11-06 13:53:14.172278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.029 [2024-11-06 13:53:14.172294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 [2024-11-06 13:53:14.184533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:51.029 [2024-11-06 13:53:14.184702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.029 [2024-11-06 13:53:14.184718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 [2024-11-06 13:53:14.197052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f520) with pdu=0x2000166fe2e8 00:28:51.029 [2024-11-06 13:53:14.197339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.029 [2024-11-06 13:53:14.197355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.029 20318.00 IOPS, 79.37 MiB/s 00:28:51.029 Latency(us) 00:28:51.029 [2024-11-06T12:53:14.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.029 nvme0n1 : 2.01 20320.86 79.38 0.00 0.00 6287.29 1911.47 12779.52 00:28:51.029 [2024-11-06T12:53:14.405Z] =================================================================================================================== 00:28:51.029 [2024-11-06T12:53:14.405Z] Total : 20320.86 79.38 0.00 0.00 6287.29 1911.47 12779.52 00:28:51.029 { 
00:28:51.029 "results": [ 00:28:51.029 { 00:28:51.029 "job": "nvme0n1", 00:28:51.029 "core_mask": "0x2", 00:28:51.029 "workload": "randwrite", 00:28:51.029 "status": "finished", 00:28:51.029 "queue_depth": 128, 00:28:51.029 "io_size": 4096, 00:28:51.029 "runtime": 2.006017, 00:28:51.029 "iops": 20320.864678614387, 00:28:51.029 "mibps": 79.37837765083745, 00:28:51.029 "io_failed": 0, 00:28:51.029 "io_timeout": 0, 00:28:51.029 "avg_latency_us": 6287.294185065254, 00:28:51.029 "min_latency_us": 1911.4666666666667, 00:28:51.029 "max_latency_us": 12779.52 00:28:51.029 } 00:28:51.029 ], 00:28:51.029 "core_count": 1 00:28:51.029 } 00:28:51.029 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:51.029 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:51.029 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:51.029 | .driver_specific 00:28:51.029 | .nvme_error 00:28:51.029 | .status_code 00:28:51.029 | .command_transient_transport_error' 00:28:51.029 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 817893 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 817893 ']' 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 817893 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:51.333 13:53:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 817893 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 817893' 00:28:51.333 killing process with pid 817893 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 817893 00:28:51.333 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.333 00:28:51.333 Latency(us) 00:28:51.333 [2024-11-06T12:53:14.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.333 [2024-11-06T12:53:14.709Z] =================================================================================================================== 00:28:51.333 [2024-11-06T12:53:14.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 817893 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:51.333 
13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=818665 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 818665 /var/tmp/bperf.sock 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 818665 ']' 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.333 13:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.333 [2024-11-06 13:53:14.635834] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:28:51.333 [2024-11-06 13:53:14.635893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818665 ] 00:28:51.333 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.333 Zero copy mechanism will not be used. 
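The data digest errors filling this log come from the test deliberately corrupting the CRC-32C digest (`accel_error_inject_error -o crc32c -t corrupt`), so the NVMe/TCP receiver rejects each WRITE with a transient transport error. A minimal standalone sketch of the detection step — a pure-Python CRC-32C, not SPDK's implementation, with a hypothetical `check_data_digest` helper — looks like:

```python
# Illustration only (assumption: this is NOT SPDK's code): how a receiver
# detects a corrupted NVMe/TCP data digest (DDGST), as in the log above.
# CRC-32C (Castagnoli): reflected, poly 0x1EDC6F41, reversed 0x82F63B78.

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Process one bit: shift right, XOR the reversed polynomial
            # whenever the low bit was set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def check_data_digest(payload: bytes, received_digest: int) -> bool:
    """Hypothetical helper: True when the payload matches its DDGST."""
    return crc32c(payload) == received_digest

payload = b"example PDU payload"
digest = crc32c(payload)
assert check_data_digest(payload, digest)           # clean transfer passes
corrupted = bytes([payload[0] ^ 0xFF]) + payload[1:]
assert not check_data_digest(corrupted, digest)     # digest error detected
```

With corruption injected on every transfer, each failed digest check surfaces as one `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, which the test later tallies via `bdev_get_iostat`.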
00:28:51.636 [2024-11-06 13:53:14.720276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.636 [2024-11-06 13:53:14.748488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.273 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.844 nvme0n1 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:52.844 13:53:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.844 Zero copy mechanism will not be used. 00:28:52.844 Running I/O for 2 seconds... 00:28:52.844 [2024-11-06 13:53:16.054909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.055262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.844 [2024-11-06 13:53:16.066195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.066529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.066550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.844 
[2024-11-06 13:53:16.074042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.074376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.074395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.844 [2024-11-06 13:53:16.082115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.082434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.082452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.844 [2024-11-06 13:53:16.089097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.089411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.844 [2024-11-06 13:53:16.095438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.844 [2024-11-06 13:53:16.095643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.844 [2024-11-06 13:53:16.095660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.102415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.102616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.102638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.109524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.109726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.119049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.119372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.119389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.125232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.125436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.125453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.135380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.135711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.135728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.142391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.142713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.142730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.151854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.152182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.157356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.157709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:52.845 [2024-11-06 13:53:16.157726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.165001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.165204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.165221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.171943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.172195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.172212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.180636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.180842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.180860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.188655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.188984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.189002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.199434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.199512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.199527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.845 [2024-11-06 13:53:16.209169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:52.845 [2024-11-06 13:53:16.209521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.845 [2024-11-06 13:53:16.209538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.219652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.219860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.219878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.230247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.230620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.230638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.242220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.242559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.242576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.254032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.254368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.254386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.266518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.266865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.266882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.278706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 
00:28:53.106 [2024-11-06 13:53:16.279073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.279090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.290062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.290262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.290278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.302702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.303076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.303093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.314317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.314670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.314687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.326130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.326473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.326490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.338484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.338830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.338847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.351315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.351547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.362874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.363194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.363214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 
13:53:16.374286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.374597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.374614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-11-06 13:53:16.385765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.106 [2024-11-06 13:53:16.386121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-11-06 13:53:16.386138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.397403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.397734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.397754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.408948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.409350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.409367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.421137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.421464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.421482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.433074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.433294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.433311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.445107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.445436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.445454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.456757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.457023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.457039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.466848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.467209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.467226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.107 [2024-11-06 13:53:16.475723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.107 [2024-11-06 13:53:16.476078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.107 [2024-11-06 13:53:16.476094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.484931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.485268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.485285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.493699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.493910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.493927] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.502450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.502784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.502801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.511653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.512072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.520344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.520676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.520693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.530242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.530580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.530598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.539085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.539385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.539402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.548652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.548987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.549005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.556964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.557177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.557194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.565306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.565582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.565597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.576086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.576394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.586209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.586641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.586660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.596744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.368 [2024-11-06 13:53:16.597063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-11-06 13:53:16.597080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-11-06 13:53:16.609057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.609424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.609440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.620206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.620570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.620588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.631654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.632035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.632055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.643231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.643602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.643618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.655248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 
00:28:53.369 [2024-11-06 13:53:16.655626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.655643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.667454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.667785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.667802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.676176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.676375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.676392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.683638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.683971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.683988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.691096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.691418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.691435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.699090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.699358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.699374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.704671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.704951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.704967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.712515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.712848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.712865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 
13:53:16.719496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.719697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.724761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.724991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.729672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.729995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.730013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.369 [2024-11-06 13:53:16.734469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.369 [2024-11-06 13:53:16.734671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-11-06 13:53:16.734687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.630 [2024-11-06 13:53:16.742442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.630 [2024-11-06 13:53:16.742784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.630 [2024-11-06 13:53:16.742802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.630 [2024-11-06 13:53:16.752183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.630 [2024-11-06 13:53:16.752503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.752520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.757555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.757761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.757778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.765525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.765838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.765858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.774310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.774617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.774634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.780814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.781141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.781158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.786860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.787064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.787080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.793411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.793753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.793770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.799554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.799761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.799778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.805932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.806143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.806159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.815559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.815904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.815921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.825186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.825529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.825546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.833124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.833480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.841410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.841789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.841806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.847370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.847686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.847703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.853084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.853421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.853438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.858882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.859083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.859100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.869372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.869720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.869737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.876857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.877195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.877211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.882636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.882843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.882860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.889215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.889523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.894725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.894934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.894951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.902964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.903293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.903310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.909869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 
00:28:53.631 [2024-11-06 13:53:16.910112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.910128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.916938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.917264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.917281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.923094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.923426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.923443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.929523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.929804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.929819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-11-06 13:53:16.938426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.631 [2024-11-06 13:53:16.938749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-11-06 13:53:16.938767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.945079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.945394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.945411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.953397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.953777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.953797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.961249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.961565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.961582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 
13:53:16.970761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.971070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.971087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.979350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.979684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.979701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.985846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.986049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.986066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.992005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.992321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.992338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.632 [2024-11-06 13:53:16.999193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.632 [2024-11-06 13:53:16.999393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.632 [2024-11-06 13:53:16.999409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.007218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.007542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.007558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.013396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.013729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.013751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.021198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.021405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.021422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.027461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.027777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.027793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.034359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.034674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.034691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.894 3528.00 IOPS, 441.00 MiB/s [2024-11-06T12:53:17.270Z] [2024-11-06 13:53:17.043126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.043433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.043450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.051035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.051370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.051386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.056666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.056933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.056948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.064156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.064489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.064505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.071293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.071626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.071642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.078696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.078909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.078926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.084123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.084325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.084342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.091303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.091672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.091689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.097667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.098021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.894 [2024-11-06 13:53:17.098038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.894 [2024-11-06 13:53:17.103741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:53.894 [2024-11-06 13:53:17.103963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.103980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.110867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.111046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.111062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.117366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.117581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.117598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.124648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.124888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.124905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.129676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.129878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.138587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.138780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.138800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.146804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.894 [2024-11-06 13:53:17.147109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.894 [2024-11-06 13:53:17.147126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:53.894 [2024-11-06 13:53:17.155699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.155886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.155903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.160994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.161174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.161191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.168073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.168258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.168274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.175576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.175786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.175803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.183771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.183964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.183981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.191492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.191811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.191828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.199017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.199205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.199222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.204274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.204679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.204696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.211629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.211859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.211876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.218381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.218608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.218625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.226159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.226462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.226479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.234128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.234323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.234340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.241119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.241451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.250350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.250671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.250688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.258363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:53.895 [2024-11-06 13:53:17.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.895 [2024-11-06 13:53:17.258699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:53.895 [2024-11-06 13:53:17.267424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.267719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.267739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.274194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.274526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.274542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.281577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.281877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.281894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.289698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.290026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.290043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.298943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.299259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.299276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.308564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.308760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.308777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.318541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.318829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.318846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.327361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.327598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.327615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.335733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.336026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.336042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.343537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.343849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.352408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.352781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.361569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.361884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.361901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.369841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.157 [2024-11-06 13:53:17.370153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.157 [2024-11-06 13:53:17.370170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.157 [2024-11-06 13:53:17.378003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.378329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.378347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.385699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.386023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.386040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.394274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.394616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.394632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.402502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.402692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.402709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.410873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.411202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.411218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.417906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.418085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.418102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.423398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.423710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.423726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.429972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.430245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.430262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.437498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.437822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.437839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.443965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.444265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.444282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.450755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.450958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.450974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.456645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.456964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.456981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.464506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.464767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.464784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.471125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.471518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.471541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.477984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.478284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.478301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.483060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.483373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.483390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.490384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.490699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.490716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.495573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.495918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.495935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.503390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.503675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.503692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.510284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.510537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.517259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.517498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.517515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.158 [2024-11-06 13:53:17.525899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.158 [2024-11-06 13:53:17.526176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.158 [2024-11-06 13:53:17.526193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.531055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.531370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.531388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.538929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.539223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.539240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.546126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.546426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.546443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.553501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.553796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.553813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.560969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.561213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.561230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.567916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.568133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.568149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.574497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.574675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.574692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.582727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.582996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.583013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.587626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.587944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.587961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.594521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.594813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.594831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.602857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.603162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.603179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.609859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.610037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.616664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.616949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.616966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.624841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.625130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.625146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.630430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.630623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.630639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.421 [2024-11-06 13:53:17.636085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.421 [2024-11-06 13:53:17.636363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.421 [2024-11-06 13:53:17.636380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.643944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.644246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.644263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.650729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.651049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.651069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.655665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.655962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.655979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.661924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.662189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.662205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.667004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.667184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.667201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.674568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.674837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.674854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.683065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.683252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.683269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.691287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.691466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.691483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 13:53:17.697070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90
00:28:54.422 [2024-11-06 13:53:17.697309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.422 [2024-11-06 13:53:17.697326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:54.422 [2024-11-06 
13:53:17.703157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.703355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.703371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.711070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.711378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.711395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.721729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.722035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.722051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.728384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.728689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.728706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.738731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.738990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.739007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.747828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.748076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.754461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.754712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.760099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.760278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.760295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.767142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.767324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.767341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.776083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.776334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.776353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.783860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.784123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 13:53:17.784139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.422 [2024-11-06 13:53:17.790886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.422 [2024-11-06 13:53:17.791133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.422 [2024-11-06 
13:53:17.791150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.683 [2024-11-06 13:53:17.799934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.683 [2024-11-06 13:53:17.800184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.683 [2024-11-06 13:53:17.800201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.683 [2024-11-06 13:53:17.808798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.683 [2024-11-06 13:53:17.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.683 [2024-11-06 13:53:17.809159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.683 [2024-11-06 13:53:17.817180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.683 [2024-11-06 13:53:17.817421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.683 [2024-11-06 13:53:17.817438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.824772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.825151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.825168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.832987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.833188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.833205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.840170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.840348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.840365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.848491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.848799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.848816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.855982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.856172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.856189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.865334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.865693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.865710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.873635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.873870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.873887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.881799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.881991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.882008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.889819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 
13:53:17.890255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.890272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.898529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.898857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.907304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.907601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.907618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.915519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.915710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.915726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.923560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.923863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.923880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.933389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.933706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.933723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.939771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.939952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.939969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.949052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.949231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.949249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.958163] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.958533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.958550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.966364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.966711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.966728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.974848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.975119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.975136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.982860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.983161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.983178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.991293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.991587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.991607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:17.999599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:17.999876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:17.999893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:18.008960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:18.009250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:18.009266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:18.017473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:18.017827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:18.017843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:18.024939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:18.025237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:18.025254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:18.033506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.684 [2024-11-06 13:53:18.033800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.684 [2024-11-06 13:53:18.033817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.684 [2024-11-06 13:53:18.040872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x246f860) with pdu=0x2000166fef90 00:28:54.685 [2024-11-06 13:53:18.041211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.685 [2024-11-06 13:53:18.041228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.685 3816.00 IOPS, 477.00 MiB/s 00:28:54.685 Latency(us) 00:28:54.685 [2024-11-06T12:53:18.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:54.685 nvme0n1 : 2.01 3815.51 476.94 0.00 0.00 4187.07 1884.16 12724.91 00:28:54.685 
[2024-11-06T12:53:18.061Z] =================================================================================================================== 00:28:54.685 [2024-11-06T12:53:18.061Z] Total : 3815.51 476.94 0.00 0.00 4187.07 1884.16 12724.91 00:28:54.685 { 00:28:54.685 "results": [ 00:28:54.685 { 00:28:54.685 "job": "nvme0n1", 00:28:54.685 "core_mask": "0x2", 00:28:54.685 "workload": "randwrite", 00:28:54.685 "status": "finished", 00:28:54.685 "queue_depth": 16, 00:28:54.685 "io_size": 131072, 00:28:54.685 "runtime": 2.005497, 00:28:54.685 "iops": 3815.5130623481364, 00:28:54.685 "mibps": 476.93913279351705, 00:28:54.685 "io_failed": 0, 00:28:54.685 "io_timeout": 0, 00:28:54.685 "avg_latency_us": 4187.066234535633, 00:28:54.685 "min_latency_us": 1884.16, 00:28:54.685 "max_latency_us": 12724.906666666666 00:28:54.685 } 00:28:54.685 ], 00:28:54.685 "core_count": 1 00:28:54.685 } 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:54.945 | .driver_specific 00:28:54.945 | .nvme_error 00:28:54.945 | .status_code 00:28:54.945 | .command_transient_transport_error' 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 246 > 0 )) 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 818665 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 818665 ']' 00:28:54.945 13:53:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 818665 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 818665 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 818665' 00:28:54.945 killing process with pid 818665 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 818665 00:28:54.945 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.945 00:28:54.945 Latency(us) 00:28:54.945 [2024-11-06T12:53:18.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.945 [2024-11-06T12:53:18.321Z] =================================================================================================================== 00:28:54.945 [2024-11-06T12:53:18.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.945 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 818665 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 816266 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 816266 ']' 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # kill -0 816266 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 816266 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 816266' 00:28:55.206 killing process with pid 816266 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 816266 00:28:55.206 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 816266 00:28:55.466 00:28:55.466 real 0m16.343s 00:28:55.466 user 0m32.396s 00:28:55.466 sys 0m3.487s 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.466 ************************************ 00:28:55.466 END TEST nvmf_digest_error 00:28:55.466 ************************************ 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.466 rmmod nvme_tcp 00:28:55.466 rmmod nvme_fabrics 00:28:55.466 rmmod nvme_keyring 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 816266 ']' 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 816266 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 816266 ']' 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 816266 00:28:55.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (816266) - No such process 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 816266 is not found' 00:28:55.466 Process with pid 816266 is not found 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:55.466 13:53:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.466 13:53:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.012 00:28:58.012 real 0m43.035s 00:28:58.012 user 1m7.754s 00:28:58.012 sys 0m12.767s 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:58.012 ************************************ 00:28:58.012 END TEST nvmf_digest 00:28:58.012 ************************************ 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:58.012 ************************************ 00:28:58.012 START TEST nvmf_bdevperf 00:28:58.012 ************************************ 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:58.012 * Looking for test storage... 00:28:58.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:58.012 13:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:28:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.012 --rc genhtml_branch_coverage=1 00:28:58.012 --rc genhtml_function_coverage=1 00:28:58.012 --rc genhtml_legend=1 00:28:58.012 --rc geninfo_all_blocks=1 00:28:58.012 --rc geninfo_unexecuted_blocks=1 00:28:58.012 00:28:58.012 ' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.012 --rc genhtml_branch_coverage=1 00:28:58.012 --rc genhtml_function_coverage=1 00:28:58.012 --rc genhtml_legend=1 00:28:58.012 --rc geninfo_all_blocks=1 00:28:58.012 --rc geninfo_unexecuted_blocks=1 00:28:58.012 00:28:58.012 ' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.012 --rc genhtml_branch_coverage=1 00:28:58.012 --rc genhtml_function_coverage=1 00:28:58.012 --rc genhtml_legend=1 00:28:58.012 --rc geninfo_all_blocks=1 00:28:58.012 --rc geninfo_unexecuted_blocks=1 00:28:58.012 00:28:58.012 ' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.012 --rc genhtml_branch_coverage=1 00:28:58.012 --rc genhtml_function_coverage=1 00:28:58.012 --rc genhtml_legend=1 00:28:58.012 --rc geninfo_all_blocks=1 00:28:58.012 --rc geninfo_unexecuted_blocks=1 00:28:58.012 00:28:58.012 ' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.012 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:28:58.013 13:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.156 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.156 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.157 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.157 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:29:06.157 00:29:06.157 --- 10.0.0.2 ping statistics --- 00:29:06.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.157 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:29:06.157 00:29:06.157 --- 10.0.0.1 ping statistics --- 00:29:06.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.157 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=823420 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 823420 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 823420 ']' 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:06.157 13:53:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 [2024-11-06 13:53:28.492771] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:29:06.157 [2024-11-06 13:53:28.492841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.157 [2024-11-06 13:53:28.594020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.157 [2024-11-06 13:53:28.646072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.157 [2024-11-06 13:53:28.646125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:06.157 [2024-11-06 13:53:28.646133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.157 [2024-11-06 13:53:28.646141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.157 [2024-11-06 13:53:28.646148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.157 [2024-11-06 13:53:28.647960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.157 [2024-11-06 13:53:28.648368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.157 [2024-11-06 13:53:28.648372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 [2024-11-06 13:53:29.355653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.157 13:53:29 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 Malloc0 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.157 [2024-11-06 13:53:29.419770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.157 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.157 { 00:29:06.157 "params": { 00:29:06.157 "name": "Nvme$subsystem", 00:29:06.157 "trtype": "$TEST_TRANSPORT", 00:29:06.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.157 "adrfam": "ipv4", 00:29:06.157 "trsvcid": "$NVMF_PORT", 00:29:06.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.158 "hdgst": ${hdgst:-false}, 00:29:06.158 "ddgst": ${ddgst:-false} 00:29:06.158 }, 00:29:06.158 "method": "bdev_nvme_attach_controller" 00:29:06.158 } 00:29:06.158 EOF 00:29:06.158 )") 00:29:06.158 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:06.158 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:06.158 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:06.158 13:53:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:06.158 "params": { 00:29:06.158 "name": "Nvme1", 00:29:06.158 "trtype": "tcp", 00:29:06.158 "traddr": "10.0.0.2", 00:29:06.158 "adrfam": "ipv4", 00:29:06.158 "trsvcid": "4420", 00:29:06.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.158 "hdgst": false, 00:29:06.158 "ddgst": false 00:29:06.158 }, 00:29:06.158 "method": "bdev_nvme_attach_controller" 00:29:06.158 }' 00:29:06.158 [2024-11-06 13:53:29.483946] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:29:06.158 [2024-11-06 13:53:29.483996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823726 ] 00:29:06.418 [2024-11-06 13:53:29.554305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.418 [2024-11-06 13:53:29.590251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.677 Running I/O for 1 seconds... 
00:29:07.620 8963.00 IOPS, 35.01 MiB/s 00:29:07.620 Latency(us) 00:29:07.620 [2024-11-06T12:53:30.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.620 Verification LBA range: start 0x0 length 0x4000 00:29:07.620 Nvme1n1 : 1.01 9022.33 35.24 0.00 0.00 14113.94 1399.47 14090.24 00:29:07.620 [2024-11-06T12:53:30.996Z] =================================================================================================================== 00:29:07.620 [2024-11-06T12:53:30.996Z] Total : 9022.33 35.24 0.00 0.00 14113.94 1399.47 14090.24 00:29:07.620 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=824059 00:29:07.620 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:07.621 { 00:29:07.621 "params": { 00:29:07.621 "name": "Nvme$subsystem", 00:29:07.621 "trtype": "$TEST_TRANSPORT", 00:29:07.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.621 "adrfam": "ipv4", 00:29:07.621 "trsvcid": "$NVMF_PORT", 00:29:07.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.621 "hdgst": ${hdgst:-false}, 00:29:07.621 "ddgst": 
${ddgst:-false} 00:29:07.621 }, 00:29:07.621 "method": "bdev_nvme_attach_controller" 00:29:07.621 } 00:29:07.621 EOF 00:29:07.621 )") 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:07.621 13:53:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:07.621 "params": { 00:29:07.621 "name": "Nvme1", 00:29:07.621 "trtype": "tcp", 00:29:07.621 "traddr": "10.0.0.2", 00:29:07.621 "adrfam": "ipv4", 00:29:07.621 "trsvcid": "4420", 00:29:07.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:07.621 "hdgst": false, 00:29:07.621 "ddgst": false 00:29:07.621 }, 00:29:07.621 "method": "bdev_nvme_attach_controller" 00:29:07.621 }' 00:29:07.882 [2024-11-06 13:53:31.029756] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:29:07.882 [2024-11-06 13:53:31.029816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824059 ] 00:29:07.882 [2024-11-06 13:53:31.099426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.882 [2024-11-06 13:53:31.134653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.143 Running I/O for 15 seconds... 
00:29:10.467 9419.00 IOPS, 36.79 MiB/s [2024-11-06T12:53:34.107Z] 10288.00 IOPS, 40.19 MiB/s [2024-11-06T12:53:34.107Z] 13:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 823420 00:29:10.731 13:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:10.731 [2024-11-06 13:53:33.992437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.731 [2024-11-06 13:53:33.992477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-06 13:53:33.992498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.731 [2024-11-06 13:53:33.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-06 13:53:33.992521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.731 [2024-11-06 13:53:33.992531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-06 13:53:33.992540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.731 [2024-11-06 13:53:33.992548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-06 13:53:33.992558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.731 [2024-11-06 13:53:33.992565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [... repeated nvme_qpair print_command / print_completion entries (ABORTED - SQ DELETION) elided: WRITE lba 82000-82408 and READ lba 81656-81776 ...] 00:29:10.733 [2024-11-06 13:53:33.993846] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.733 [2024-11-06 13:53:33.993987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.733 [2024-11-06 13:53:33.993997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 
[2024-11-06 13:53:33.994037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994131] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.734 [2024-11-06 13:53:33.994392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.734 [2024-11-06 13:53:33.994523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.734 [2024-11-06 13:53:33.994556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.734 [2024-11-06 13:53:33.994563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.735 [2024-11-06 13:53:33.994750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.994759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd46370 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:33.994768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.735 [2024-11-06 13:53:33.994774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.735 [2024-11-06 13:53:33.994780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81952 len:8 PRP1 0x0 PRP2 0x0 00:29:10.735 [2024-11-06 13:53:33.994788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.735 [2024-11-06 13:53:33.998353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:33.998406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 
[2024-11-06 13:53:33.999085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:33.999103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:33.999112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:33.999332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:33.999552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:33.999565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:33.999573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:33.999582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.012533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.013200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.013240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.013251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:34.013491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:34.013714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:34.013722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:34.013730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:34.013739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.026491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.027119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.027157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.027168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:34.027406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:34.027628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:34.027637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:34.027645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:34.027653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.040413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.040969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.040990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.040998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:34.041217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:34.041436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:34.041444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:34.041451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:34.041463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.054205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.054783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.054822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.054834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:34.055073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:34.055296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:34.055305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:34.055312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:34.055321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.068098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.068602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.068621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.068629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.735 [2024-11-06 13:53:34.068853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.735 [2024-11-06 13:53:34.069082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.735 [2024-11-06 13:53:34.069092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.735 [2024-11-06 13:53:34.069100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.735 [2024-11-06 13:53:34.069107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.735 [2024-11-06 13:53:34.082048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.735 [2024-11-06 13:53:34.082544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.735 [2024-11-06 13:53:34.082561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.735 [2024-11-06 13:53:34.082569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.736 [2024-11-06 13:53:34.082793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.736 [2024-11-06 13:53:34.083013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.736 [2024-11-06 13:53:34.083021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.736 [2024-11-06 13:53:34.083028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.736 [2024-11-06 13:53:34.083035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.736 [2024-11-06 13:53:34.095985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.736 [2024-11-06 13:53:34.096512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.736 [2024-11-06 13:53:34.096533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.736 [2024-11-06 13:53:34.096541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.736 [2024-11-06 13:53:34.096766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.736 [2024-11-06 13:53:34.096985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.736 [2024-11-06 13:53:34.096994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.736 [2024-11-06 13:53:34.097001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.736 [2024-11-06 13:53:34.097008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.109958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.110488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.110503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.110511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.110729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.110952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.110961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.110968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.110975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.123921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.124449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.124464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.124472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.124690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.124914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.124922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.124929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.124936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.137894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.138412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.138428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.138435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.138657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.138881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.138890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.138897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.138903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.151851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.152371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.152387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.152395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.152613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.152837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.152845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.152853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.152860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.165803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.166289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.166304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.166312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.166530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.166756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.166764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.166772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.166778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.179726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.180263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.180279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.180287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.180505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.180723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.180734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.180741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.180754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.193698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.194228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.194245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.194252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.194470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.194688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.194696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.194703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.194710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.207504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.208001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.208019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.208026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.998 [2024-11-06 13:53:34.208245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.998 [2024-11-06 13:53:34.208464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.998 [2024-11-06 13:53:34.208472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.998 [2024-11-06 13:53:34.208479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.998 [2024-11-06 13:53:34.208485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.998 [2024-11-06 13:53:34.221440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.998 [2024-11-06 13:53:34.221974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.998 [2024-11-06 13:53:34.221991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.998 [2024-11-06 13:53:34.221999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.222217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.222436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.222444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.222451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.222458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.235413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.235940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.235956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.235964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.236182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.236400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.236407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.236415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.236421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.249367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.249882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.249922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.249934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.250175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.250398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.250415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.250423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.250432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.263186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.263727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.263753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.263761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.263981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.264200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.264208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.264215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.264223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.276975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.277546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.277568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.277576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.277801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.278021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.278031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.278039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.278047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.290787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.291428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.291466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.291476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.291714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.291946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.291956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.291964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.291972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.304715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.305306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.305326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.305334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.305553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.305778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.305787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.305794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.305801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.318530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.319078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.319095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.319103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.319326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.319545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.319553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.319560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.319567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.332512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.333070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.333077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.333296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.333515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.333524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.999 [2024-11-06 13:53:34.333531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.999 [2024-11-06 13:53:34.333537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.999 [2024-11-06 13:53:34.346497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.999 [2024-11-06 13:53:34.347161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.999 [2024-11-06 13:53:34.347199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:10.999 [2024-11-06 13:53:34.347209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:10.999 [2024-11-06 13:53:34.347448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:10.999 [2024-11-06 13:53:34.347670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.999 [2024-11-06 13:53:34.347679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.000 [2024-11-06 13:53:34.347687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.000 [2024-11-06 13:53:34.347695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.000 [2024-11-06 13:53:34.360443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.000 [2024-11-06 13:53:34.361030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.000 [2024-11-06 13:53:34.361050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.000 [2024-11-06 13:53:34.361058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.000 [2024-11-06 13:53:34.361278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.000 [2024-11-06 13:53:34.361497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.000 [2024-11-06 13:53:34.361505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.000 [2024-11-06 13:53:34.361517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.000 [2024-11-06 13:53:34.361524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.262 [2024-11-06 13:53:34.374270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.262 [2024-11-06 13:53:34.374852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-11-06 13:53:34.374870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.262 [2024-11-06 13:53:34.374878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.262 [2024-11-06 13:53:34.375096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.262 [2024-11-06 13:53:34.375315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.262 [2024-11-06 13:53:34.375322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.262 [2024-11-06 13:53:34.375329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.262 [2024-11-06 13:53:34.375336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.262 [2024-11-06 13:53:34.388061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.262 [2024-11-06 13:53:34.388728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-11-06 13:53:34.388773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.262 [2024-11-06 13:53:34.388784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.262 [2024-11-06 13:53:34.389023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.262 [2024-11-06 13:53:34.389245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.262 [2024-11-06 13:53:34.389254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.262 [2024-11-06 13:53:34.389261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.262 [2024-11-06 13:53:34.389269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.262 [2024-11-06 13:53:34.401999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.262 [2024-11-06 13:53:34.402666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.262 [2024-11-06 13:53:34.402704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.262 [2024-11-06 13:53:34.402716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.262 [2024-11-06 13:53:34.402967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.262 [2024-11-06 13:53:34.403192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.262 [2024-11-06 13:53:34.403200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.262 [2024-11-06 13:53:34.403208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.262 [2024-11-06 13:53:34.403216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.262 [2024-11-06 13:53:34.415940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.262 [2024-11-06 13:53:34.416615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-11-06 13:53:34.416653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.262 [2024-11-06 13:53:34.416663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.262 [2024-11-06 13:53:34.416912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.262 [2024-11-06 13:53:34.417135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.262 [2024-11-06 13:53:34.417144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.262 [2024-11-06 13:53:34.417151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.262 [2024-11-06 13:53:34.417159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.262 [2024-11-06 13:53:34.429893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.262 [2024-11-06 13:53:34.430549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-11-06 13:53:34.430586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.262 [2024-11-06 13:53:34.430597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.262 [2024-11-06 13:53:34.430843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.262 [2024-11-06 13:53:34.431067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.262 [2024-11-06 13:53:34.431075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.262 [2024-11-06 13:53:34.431083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.262 [2024-11-06 13:53:34.431091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.262 [2024-11-06 13:53:34.443859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.262 [2024-11-06 13:53:34.444538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-11-06 13:53:34.444576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.262 [2024-11-06 13:53:34.444587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.262 [2024-11-06 13:53:34.444835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.262 [2024-11-06 13:53:34.445059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.262 [2024-11-06 13:53:34.445067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.262 [2024-11-06 13:53:34.445075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.262 [2024-11-06 13:53:34.445083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.262 8863.67 IOPS, 34.62 MiB/s [2024-11-06T12:53:34.638Z] [2024-11-06 13:53:34.457865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.262 [2024-11-06 13:53:34.458489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-11-06 13:53:34.458536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.262 [2024-11-06 13:53:34.458547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.262 [2024-11-06 13:53:34.458794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.262 [2024-11-06 13:53:34.459018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.262 [2024-11-06 13:53:34.459026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.262 [2024-11-06 13:53:34.459033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.262 [2024-11-06 13:53:34.459042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.262 [2024-11-06 13:53:34.471781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.262 [2024-11-06 13:53:34.472454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.262 [2024-11-06 13:53:34.472492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.262 [2024-11-06 13:53:34.472503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.472741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.472973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.472982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.472989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.472998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.485722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.486376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.486414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.486425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.486662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.486895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.486905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.486913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.486921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.499654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.500247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.500266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.500274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.500493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.500717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.500725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.500732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.500739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.513479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.513921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.513939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.513947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.514165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.514383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.514391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.514398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.514405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.527336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.527782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.527803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.527810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.528031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.528249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.528257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.528264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.528271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.541212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.541887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.541925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.541936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.542174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.542396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.542405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.542417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.542425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.555160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.555811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.555849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.555861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.556099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.556321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.556330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.556338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.556345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.569078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.569733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.569778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.569788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.570026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.570249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.570257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.570265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.570273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.583020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.583700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.583738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.583758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.583997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.584219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.584228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.584235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.584243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.596969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.597601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.597639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.263 [2024-11-06 13:53:34.597650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.263 [2024-11-06 13:53:34.597898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.263 [2024-11-06 13:53:34.598121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.263 [2024-11-06 13:53:34.598129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.263 [2024-11-06 13:53:34.598137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.263 [2024-11-06 13:53:34.598145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.263 [2024-11-06 13:53:34.610865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.263 [2024-11-06 13:53:34.611545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.263 [2024-11-06 13:53:34.611583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.264 [2024-11-06 13:53:34.611594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.264 [2024-11-06 13:53:34.611844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.264 [2024-11-06 13:53:34.612067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.264 [2024-11-06 13:53:34.612076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.264 [2024-11-06 13:53:34.612083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.264 [2024-11-06 13:53:34.612091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.264 [2024-11-06 13:53:34.624828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.264 [2024-11-06 13:53:34.625503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.264 [2024-11-06 13:53:34.625541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.264 [2024-11-06 13:53:34.625552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.264 [2024-11-06 13:53:34.625799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.264 [2024-11-06 13:53:34.626023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.264 [2024-11-06 13:53:34.626031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.264 [2024-11-06 13:53:34.626039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.264 [2024-11-06 13:53:34.626047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.638791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.639267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.639287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.639300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.639519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.639738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.639754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.639762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.639768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.652691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.653265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.653283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.653290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.653509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.653727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.653734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.653741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.653756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.666476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.667053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.667071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.667078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.667298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.667516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.667524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.667532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.667538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.680269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.680765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.680782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.680790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.681009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.681231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.681240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.681247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.681253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.694171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.694694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.694732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.694743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.694990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.695212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.695221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.695229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.695237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.707953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.708618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.708656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.708667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.526 [2024-11-06 13:53:34.708914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.526 [2024-11-06 13:53:34.709137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.526 [2024-11-06 13:53:34.709145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.526 [2024-11-06 13:53:34.709153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.526 [2024-11-06 13:53:34.709161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.526 [2024-11-06 13:53:34.721884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.526 [2024-11-06 13:53:34.722448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.526 [2024-11-06 13:53:34.722485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.526 [2024-11-06 13:53:34.722496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.722734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.722967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.722977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.722989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.722997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.735724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.527 [2024-11-06 13:53:34.736399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.527 [2024-11-06 13:53:34.736437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.527 [2024-11-06 13:53:34.736448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.736685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.736918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.736928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.736935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.736943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.749664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.527 [2024-11-06 13:53:34.750343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.527 [2024-11-06 13:53:34.750381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.527 [2024-11-06 13:53:34.750392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.750630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.750863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.750874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.750882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.750890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.763624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.527 [2024-11-06 13:53:34.764297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.527 [2024-11-06 13:53:34.764335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.527 [2024-11-06 13:53:34.764346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.764584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.764815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.764824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.764833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.764841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.777596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.527 [2024-11-06 13:53:34.778157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.527 [2024-11-06 13:53:34.778176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.527 [2024-11-06 13:53:34.778184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.778404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.778622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.778630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.778637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.778644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.791575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.527 [2024-11-06 13:53:34.792193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.527 [2024-11-06 13:53:34.792231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:11.527 [2024-11-06 13:53:34.792242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:11.527 [2024-11-06 13:53:34.792480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:11.527 [2024-11-06 13:53:34.792703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.527 [2024-11-06 13:53:34.792711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.527 [2024-11-06 13:53:34.792719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.527 [2024-11-06 13:53:34.792727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.527 [2024-11-06 13:53:34.805467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.527 [2024-11-06 13:53:34.806083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.527 [2024-11-06 13:53:34.806121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.527 [2024-11-06 13:53:34.806132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.527 [2024-11-06 13:53:34.806370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.527 [2024-11-06 13:53:34.806593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.527 [2024-11-06 13:53:34.806601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.527 [2024-11-06 13:53:34.806608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.527 [2024-11-06 13:53:34.806617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.527 [2024-11-06 13:53:34.819347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.527 [2024-11-06 13:53:34.820040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.527 [2024-11-06 13:53:34.820078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.527 [2024-11-06 13:53:34.820093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.527 [2024-11-06 13:53:34.820332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.527 [2024-11-06 13:53:34.820554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.527 [2024-11-06 13:53:34.820563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.527 [2024-11-06 13:53:34.820570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.527 [2024-11-06 13:53:34.820579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.527 [2024-11-06 13:53:34.833316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.527 [2024-11-06 13:53:34.833898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.527 [2024-11-06 13:53:34.833918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.527 [2024-11-06 13:53:34.833926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.527 [2024-11-06 13:53:34.834145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.527 [2024-11-06 13:53:34.834363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.527 [2024-11-06 13:53:34.834371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.527 [2024-11-06 13:53:34.834378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.527 [2024-11-06 13:53:34.834385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.527 [2024-11-06 13:53:34.847114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.527 [2024-11-06 13:53:34.847683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.527 [2024-11-06 13:53:34.847699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.527 [2024-11-06 13:53:34.847707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.527 [2024-11-06 13:53:34.847931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.527 [2024-11-06 13:53:34.848149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.527 [2024-11-06 13:53:34.848157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.527 [2024-11-06 13:53:34.848164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.527 [2024-11-06 13:53:34.848170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.527 [2024-11-06 13:53:34.860900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.528 [2024-11-06 13:53:34.861561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.528 [2024-11-06 13:53:34.861599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.528 [2024-11-06 13:53:34.861609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.528 [2024-11-06 13:53:34.861860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.528 [2024-11-06 13:53:34.862089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.528 [2024-11-06 13:53:34.862097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.528 [2024-11-06 13:53:34.862105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.528 [2024-11-06 13:53:34.862113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.528 [2024-11-06 13:53:34.874871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.528 [2024-11-06 13:53:34.875560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.528 [2024-11-06 13:53:34.875599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.528 [2024-11-06 13:53:34.875610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.528 [2024-11-06 13:53:34.875859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.528 [2024-11-06 13:53:34.876082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.528 [2024-11-06 13:53:34.876091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.528 [2024-11-06 13:53:34.876098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.528 [2024-11-06 13:53:34.876106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.528 [2024-11-06 13:53:34.888846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.528 [2024-11-06 13:53:34.889516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.528 [2024-11-06 13:53:34.889554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.528 [2024-11-06 13:53:34.889565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.528 [2024-11-06 13:53:34.889811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.528 [2024-11-06 13:53:34.890034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.528 [2024-11-06 13:53:34.890043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.528 [2024-11-06 13:53:34.890051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.528 [2024-11-06 13:53:34.890059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.902786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.903417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.790 [2024-11-06 13:53:34.903455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.790 [2024-11-06 13:53:34.903466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.790 [2024-11-06 13:53:34.903704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.790 [2024-11-06 13:53:34.903935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.790 [2024-11-06 13:53:34.903945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.790 [2024-11-06 13:53:34.903957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.790 [2024-11-06 13:53:34.903966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.916688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.917322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.790 [2024-11-06 13:53:34.917361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.790 [2024-11-06 13:53:34.917371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.790 [2024-11-06 13:53:34.917609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.790 [2024-11-06 13:53:34.917841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.790 [2024-11-06 13:53:34.917851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.790 [2024-11-06 13:53:34.917859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.790 [2024-11-06 13:53:34.917867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.930608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.931161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.790 [2024-11-06 13:53:34.931181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.790 [2024-11-06 13:53:34.931189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.790 [2024-11-06 13:53:34.931407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.790 [2024-11-06 13:53:34.931626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.790 [2024-11-06 13:53:34.931635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.790 [2024-11-06 13:53:34.931642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.790 [2024-11-06 13:53:34.931649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.944588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.945134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.790 [2024-11-06 13:53:34.945151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.790 [2024-11-06 13:53:34.945159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.790 [2024-11-06 13:53:34.945377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.790 [2024-11-06 13:53:34.945595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.790 [2024-11-06 13:53:34.945603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.790 [2024-11-06 13:53:34.945610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.790 [2024-11-06 13:53:34.945617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.958542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.959135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.790 [2024-11-06 13:53:34.959151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.790 [2024-11-06 13:53:34.959158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.790 [2024-11-06 13:53:34.959376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.790 [2024-11-06 13:53:34.959595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.790 [2024-11-06 13:53:34.959602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.790 [2024-11-06 13:53:34.959610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.790 [2024-11-06 13:53:34.959616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.790 [2024-11-06 13:53:34.972345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.790 [2024-11-06 13:53:34.972984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:34.973022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:34.973033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:34.973271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:34.973493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:34.973501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:34.973509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:34.973517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:34.986246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:34.986848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:34.986886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:34.986896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:34.987134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:34.987357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:34.987365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:34.987373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:34.987381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.000114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.000812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.000851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.000868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.001107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.001329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.001338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.001346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.001355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.014086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.014691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.014729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.014742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.014990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.015213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.015222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.015230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.015238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.028044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.028710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.028756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.028769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.029010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.029233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.029241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.029249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.029257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.041995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.042668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.042706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.042717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.042964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.043192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.043201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.043209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.043217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.055944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.056617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.056655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.056665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.056912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.057136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.057144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.057152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.057160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.069884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.070431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.070469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.070480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.070718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.070950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.070959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.070967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.070975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.083706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.084359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.084397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.084408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.084646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.084878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.084888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.084896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.084908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.097513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.098048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-06 13:53:35.098068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.791 [2024-11-06 13:53:35.098076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.791 [2024-11-06 13:53:35.098295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.791 [2024-11-06 13:53:35.098514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.791 [2024-11-06 13:53:35.098523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.791 [2024-11-06 13:53:35.098531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.791 [2024-11-06 13:53:35.098539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.791 [2024-11-06 13:53:35.111472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.791 [2024-11-06 13:53:35.112011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-06 13:53:35.112029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.792 [2024-11-06 13:53:35.112037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.792 [2024-11-06 13:53:35.112256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.792 [2024-11-06 13:53:35.112474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.792 [2024-11-06 13:53:35.112482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.792 [2024-11-06 13:53:35.112489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.792 [2024-11-06 13:53:35.112495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.792 [2024-11-06 13:53:35.125424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.792 [2024-11-06 13:53:35.126067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-06 13:53:35.126105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.792 [2024-11-06 13:53:35.126118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.792 [2024-11-06 13:53:35.126355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.792 [2024-11-06 13:53:35.126578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.792 [2024-11-06 13:53:35.126587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.792 [2024-11-06 13:53:35.126594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.792 [2024-11-06 13:53:35.126602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.792 [2024-11-06 13:53:35.139353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.792 [2024-11-06 13:53:35.140054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-06 13:53:35.140092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.792 [2024-11-06 13:53:35.140103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.792 [2024-11-06 13:53:35.140340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.792 [2024-11-06 13:53:35.140563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.792 [2024-11-06 13:53:35.140571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.792 [2024-11-06 13:53:35.140579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.792 [2024-11-06 13:53:35.140587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.792 [2024-11-06 13:53:35.153318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.792 [2024-11-06 13:53:35.154035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-06 13:53:35.154073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:11.792 [2024-11-06 13:53:35.154084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:11.792 [2024-11-06 13:53:35.154322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:11.792 [2024-11-06 13:53:35.154544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.792 [2024-11-06 13:53:35.154553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.792 [2024-11-06 13:53:35.154561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.792 [2024-11-06 13:53:35.154569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.167302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.167852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.167872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.167881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.168100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.054 [2024-11-06 13:53:35.168319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.054 [2024-11-06 13:53:35.168326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.054 [2024-11-06 13:53:35.168334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.054 [2024-11-06 13:53:35.168340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.181278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.181852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.181869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.181877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.182101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.054 [2024-11-06 13:53:35.182319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.054 [2024-11-06 13:53:35.182327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.054 [2024-11-06 13:53:35.182334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.054 [2024-11-06 13:53:35.182340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.195056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.195584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.195600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.195608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.195832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.054 [2024-11-06 13:53:35.196051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.054 [2024-11-06 13:53:35.196059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.054 [2024-11-06 13:53:35.196066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.054 [2024-11-06 13:53:35.196073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.208996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.209630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.209668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.209679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.209926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.054 [2024-11-06 13:53:35.210149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.054 [2024-11-06 13:53:35.210157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.054 [2024-11-06 13:53:35.210165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.054 [2024-11-06 13:53:35.210173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.222897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.223419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.223456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.223466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.223704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.054 [2024-11-06 13:53:35.223938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.054 [2024-11-06 13:53:35.223953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.054 [2024-11-06 13:53:35.223961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.054 [2024-11-06 13:53:35.223969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.054 [2024-11-06 13:53:35.236703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.054 [2024-11-06 13:53:35.237400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.054 [2024-11-06 13:53:35.237438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.054 [2024-11-06 13:53:35.237449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.054 [2024-11-06 13:53:35.237687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.237919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.237928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.237936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.237944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.250666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.251328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.251366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.251377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.251615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.251846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.251856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.251864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.251873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.264618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.265232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.265271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.265281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.265519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.265741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.265760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.265767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.265780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.278519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.279226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.279264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.279275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.279512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.279735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.279743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.279759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.279767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.292343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.293041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.293079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.293090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.293328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.293551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.293560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.293567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.293575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.306299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.306855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.306875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.306883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.307102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.307320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.307337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.307345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.307352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.320274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.320975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.321013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.321024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.321262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.321485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.321494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.321501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.321509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.334250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.334849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.334887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.334900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.335140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.335363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.335372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.335380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.335388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.348116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.348774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.348813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.348825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.055 [2024-11-06 13:53:35.349065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.055 [2024-11-06 13:53:35.349287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.055 [2024-11-06 13:53:35.349296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.055 [2024-11-06 13:53:35.349304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.055 [2024-11-06 13:53:35.349313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.055 [2024-11-06 13:53:35.362043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.055 [2024-11-06 13:53:35.362713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.055 [2024-11-06 13:53:35.362757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.055 [2024-11-06 13:53:35.362769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.056 [2024-11-06 13:53:35.363011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.056 [2024-11-06 13:53:35.363234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.056 [2024-11-06 13:53:35.363243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.056 [2024-11-06 13:53:35.363250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.056 [2024-11-06 13:53:35.363259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.056 [2024-11-06 13:53:35.375999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.056 [2024-11-06 13:53:35.376538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.056 [2024-11-06 13:53:35.376558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.056 [2024-11-06 13:53:35.376566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.056 [2024-11-06 13:53:35.376790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.056 [2024-11-06 13:53:35.377010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.056 [2024-11-06 13:53:35.377017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.056 [2024-11-06 13:53:35.377025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.056 [2024-11-06 13:53:35.377032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.056 [2024-11-06 13:53:35.389960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.056 [2024-11-06 13:53:35.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.056 [2024-11-06 13:53:35.390646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.056 [2024-11-06 13:53:35.390657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.056 [2024-11-06 13:53:35.390903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.056 [2024-11-06 13:53:35.391127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.056 [2024-11-06 13:53:35.391135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.056 [2024-11-06 13:53:35.391143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.056 [2024-11-06 13:53:35.391151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.056 [2024-11-06 13:53:35.403915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.056 [2024-11-06 13:53:35.404592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.056 [2024-11-06 13:53:35.404630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.056 [2024-11-06 13:53:35.404642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.056 [2024-11-06 13:53:35.404888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.056 [2024-11-06 13:53:35.405112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.056 [2024-11-06 13:53:35.405126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.056 [2024-11-06 13:53:35.405133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.056 [2024-11-06 13:53:35.405141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.056 [2024-11-06 13:53:35.417872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.056 [2024-11-06 13:53:35.418459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.056 [2024-11-06 13:53:35.418478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.056 [2024-11-06 13:53:35.418486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.056 [2024-11-06 13:53:35.418705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.056 [2024-11-06 13:53:35.418929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.056 [2024-11-06 13:53:35.418938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.056 [2024-11-06 13:53:35.418945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.056 [2024-11-06 13:53:35.418952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.318 [2024-11-06 13:53:35.431672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.318 [2024-11-06 13:53:35.432255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-11-06 13:53:35.432271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.318 [2024-11-06 13:53:35.432279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.318 [2024-11-06 13:53:35.432497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.318 [2024-11-06 13:53:35.432716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.318 [2024-11-06 13:53:35.432723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.318 [2024-11-06 13:53:35.432730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.318 [2024-11-06 13:53:35.432737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.318 [2024-11-06 13:53:35.445472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.318 [2024-11-06 13:53:35.446079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-11-06 13:53:35.446117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.446129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.446371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.446593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.446602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.446610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.446623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 6647.75 IOPS, 25.97 MiB/s [2024-11-06T12:53:35.695Z] [2024-11-06 13:53:35.460019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.460670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.460708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.460720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.460968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.461192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.461201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.461208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.461216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.473996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.474604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.474624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.474632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.474858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.475078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.475086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.475093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.475100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.487875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.488476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.488513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.488524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.488772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.488996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.489004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.489012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.489020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.501754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.502381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.502420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.502431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.502668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.502899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.502908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.502916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.502924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.515669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.516216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.516235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.516243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.516462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.516681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.516690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.516698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.516705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.529641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.530271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.530311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.530322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.530560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.530790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.530800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.530808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.530816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.543553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.544234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.544272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.544284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.544530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.544760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.544770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.544778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.544786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.557518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.558071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.558090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.558098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.558317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.558535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.558551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.558558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.319 [2024-11-06 13:53:35.558565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.319 [2024-11-06 13:53:35.571497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.319 [2024-11-06 13:53:35.572107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-11-06 13:53:35.572145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.319 [2024-11-06 13:53:35.572156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.319 [2024-11-06 13:53:35.572394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.319 [2024-11-06 13:53:35.572616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.319 [2024-11-06 13:53:35.572625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.319 [2024-11-06 13:53:35.572633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.572641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.585387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.586110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.586148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.586159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.586397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.586620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.586633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.586641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.586649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.599181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.599781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.599801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.599809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.600029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.600247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.600255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.600263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.600270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.612993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.613557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.613574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.613583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.613807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.614026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.614034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.614042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.614049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.626975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.627523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.627538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.627546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.627770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.627989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.627997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.628004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.628016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.640957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.641587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.641625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.641636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.641881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.642105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.642114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.642121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.642129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.655068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.655625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.655645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.655653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.655879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.656098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.656106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.656113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.656121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.669121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.669815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.669853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.669865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.670107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.670330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.670339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.670346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.670354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.320 [2024-11-06 13:53:35.682911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.320 [2024-11-06 13:53:35.683589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-11-06 13:53:35.683631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.320 [2024-11-06 13:53:35.683642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.320 [2024-11-06 13:53:35.683888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.320 [2024-11-06 13:53:35.684111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.320 [2024-11-06 13:53:35.684120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.320 [2024-11-06 13:53:35.684127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.320 [2024-11-06 13:53:35.684135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.582 [2024-11-06 13:53:35.696867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.582 [2024-11-06 13:53:35.697413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-11-06 13:53:35.697432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.582 [2024-11-06 13:53:35.697440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.582 [2024-11-06 13:53:35.697659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.582 [2024-11-06 13:53:35.697884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.582 [2024-11-06 13:53:35.697893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.582 [2024-11-06 13:53:35.697900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.582 [2024-11-06 13:53:35.697908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.582 [2024-11-06 13:53:35.710840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.582 [2024-11-06 13:53:35.711418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.582 [2024-11-06 13:53:35.711434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.582 [2024-11-06 13:53:35.711442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.582 [2024-11-06 13:53:35.711661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.582 [2024-11-06 13:53:35.711885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.582 [2024-11-06 13:53:35.711893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.582 [2024-11-06 13:53:35.711901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.582 [2024-11-06 13:53:35.711908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.582 [2024-11-06 13:53:35.724623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.582 [2024-11-06 13:53:35.725173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-11-06 13:53:35.725190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.583 [2024-11-06 13:53:35.725197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.583 [2024-11-06 13:53:35.725420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.583 [2024-11-06 13:53:35.725639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.583 [2024-11-06 13:53:35.725646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.583 [2024-11-06 13:53:35.725653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.583 [2024-11-06 13:53:35.725660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.583 [2024-11-06 13:53:35.738604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.583 [2024-11-06 13:53:35.739136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.583 [2024-11-06 13:53:35.739154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.583 [2024-11-06 13:53:35.739161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.583 [2024-11-06 13:53:35.739379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.583 [2024-11-06 13:53:35.739597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.583 [2024-11-06 13:53:35.739606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.583 [2024-11-06 13:53:35.739614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.583 [2024-11-06 13:53:35.739621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.583 [2024-11-06 13:53:35.752548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.752982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.752998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.753005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.753223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.753441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.753450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.753457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.753464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.766393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.766888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.766905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.766912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.767131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.767349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.767361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.767368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.767375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.780316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.780983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.781021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.781032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.781270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.781493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.781501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.781509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.781517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.794252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.794836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.794875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.794886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.795123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.795346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.795354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.795362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.795370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.808105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.808742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.808787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.808799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.809038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.809261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.809270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.809277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.809285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.822029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.822685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.822724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.822734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.822980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.823203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.823211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.823220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.823228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.835968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.836632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.583 [2024-11-06 13:53:35.836670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.583 [2024-11-06 13:53:35.836681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.583 [2024-11-06 13:53:35.836927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.583 [2024-11-06 13:53:35.837151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.583 [2024-11-06 13:53:35.837159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.583 [2024-11-06 13:53:35.837167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.583 [2024-11-06 13:53:35.837175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.583 [2024-11-06 13:53:35.849911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.583 [2024-11-06 13:53:35.850487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.850506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.850515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.850734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.850958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.850967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.850974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.850981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.863699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.864362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.864405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.864418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.864657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.864887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.864897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.864904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.864912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.877652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.878286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.878324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.878335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.878573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.878803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.878813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.878821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.878829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.891553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.892112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.892132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.892140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.892359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.892577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.892585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.892592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.892599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.905531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.906058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.906075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.906082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.906306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.906524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.906533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.906540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.906547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.919480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.920114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.920152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.920163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.920401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.920624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.920632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.920640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.920648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.933389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.933839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.933859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.933867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.934086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.934305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.934313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.934321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.934328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.584 [2024-11-06 13:53:35.947271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.584 [2024-11-06 13:53:35.947839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.584 [2024-11-06 13:53:35.947856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.584 [2024-11-06 13:53:35.947864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.584 [2024-11-06 13:53:35.948082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.584 [2024-11-06 13:53:35.948300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.584 [2024-11-06 13:53:35.948309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.584 [2024-11-06 13:53:35.948321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.584 [2024-11-06 13:53:35.948328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.846 [2024-11-06 13:53:35.961051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.846 [2024-11-06 13:53:35.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.846 [2024-11-06 13:53:35.961719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.846 [2024-11-06 13:53:35.961732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.846 [2024-11-06 13:53:35.961981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.846 [2024-11-06 13:53:35.962204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.846 [2024-11-06 13:53:35.962213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.846 [2024-11-06 13:53:35.962220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.846 [2024-11-06 13:53:35.962228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.846 [2024-11-06 13:53:35.974965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.846 [2024-11-06 13:53:35.975591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.846 [2024-11-06 13:53:35.975629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.846 [2024-11-06 13:53:35.975640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.846 [2024-11-06 13:53:35.975885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.846 [2024-11-06 13:53:35.976119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.846 [2024-11-06 13:53:35.976129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:35.976136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:35.976144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:35.988872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:35.989433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:35.989453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:35.989460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:35.989680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:35.989904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:35.989912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:35.989919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:35.989926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.002655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.003279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.003318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.003329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.003567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.003798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.003808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.003815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.003824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.016564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.017187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.017225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.017236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.017474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.017697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.017705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.017714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.017722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.030458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.031019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.031039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.031047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.031265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.031484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.031492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.031499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.031506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.044239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.044809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.044826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.044838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.045057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.045276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.045284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.045291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.045298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.058096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.058671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.058688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.058696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.058919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.059138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.059146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.059153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.059160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.071883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.072404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.072420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.072428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.072646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.072870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.072880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.072887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.072894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.085827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.086405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.086421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.086428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.086647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.086874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.086882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.847 [2024-11-06 13:53:36.086890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.847 [2024-11-06 13:53:36.086896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.847 [2024-11-06 13:53:36.099607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.847 [2024-11-06 13:53:36.100155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-06 13:53:36.100193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.847 [2024-11-06 13:53:36.100206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.847 [2024-11-06 13:53:36.100445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.847 [2024-11-06 13:53:36.100667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.847 [2024-11-06 13:53:36.100675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.848 [2024-11-06 13:53:36.100682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.848 [2024-11-06 13:53:36.100690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.848 [2024-11-06 13:53:36.113423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.848 [2024-11-06 13:53:36.114005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.848 [2024-11-06 13:53:36.114025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.848 [2024-11-06 13:53:36.114033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.848 [2024-11-06 13:53:36.114252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.848 [2024-11-06 13:53:36.114471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.848 [2024-11-06 13:53:36.114479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.848 [2024-11-06 13:53:36.114487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.848 [2024-11-06 13:53:36.114494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.848 [2024-11-06 13:53:36.127306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.848 [2024-11-06 13:53:36.128025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.848 [2024-11-06 13:53:36.128063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:12.848 [2024-11-06 13:53:36.128074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:12.848 [2024-11-06 13:53:36.128312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:12.848 [2024-11-06 13:53:36.128535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.848 [2024-11-06 13:53:36.128545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.848 [2024-11-06 13:53:36.128558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.848 [2024-11-06 13:53:36.128566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.848 [2024-11-06 13:53:36.141104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.141731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.141775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.141786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.142024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.142246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.142254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.142263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.142271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.848 [2024-11-06 13:53:36.155018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.155607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.155626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.155634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.155860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.156080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.156088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.156096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.156103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.848 [2024-11-06 13:53:36.168821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.169472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.169510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.169521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.169766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.169990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.169998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.170006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.170014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.848 [2024-11-06 13:53:36.182804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.183408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.183427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.183435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.183655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.183881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.183890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.183897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.183904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.848 [2024-11-06 13:53:36.196620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.197235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.197273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.197284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.197522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.197744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.197765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.197773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.197781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.848 [2024-11-06 13:53:36.210503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.848 [2024-11-06 13:53:36.211131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-06 13:53:36.211169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:12.848 [2024-11-06 13:53:36.211180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:12.848 [2024-11-06 13:53:36.211418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:12.848 [2024-11-06 13:53:36.211640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.848 [2024-11-06 13:53:36.211648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.848 [2024-11-06 13:53:36.211656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.848 [2024-11-06 13:53:36.211664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.110 [2024-11-06 13:53:36.224395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.110 [2024-11-06 13:53:36.225071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.225109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.225125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.225363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.225585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.225593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.225601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.225609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.238348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.239018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.239056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.239066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.239304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.239527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.239535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.239543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.239551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.252283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.252976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.253014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.253024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.253262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.253484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.253493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.253501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.253509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.266253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.266874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.266912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.266924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.267165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.267392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.267402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.267409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.267418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.280160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.280860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.280898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.280908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.281146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.281369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.281378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.281386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.281394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.294127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.294718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.294737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.294752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.294971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.295190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.295198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.295205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.295211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.307940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.308546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.308584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.308595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.308841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.309065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.309073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.309085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.309093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.321826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.322495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.322533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.322544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.322791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.323016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.323024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.323032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.323040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.335771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.336298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.336336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.336346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.336584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.111 [2024-11-06 13:53:36.336822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.111 [2024-11-06 13:53:36.336833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.111 [2024-11-06 13:53:36.336841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.111 [2024-11-06 13:53:36.336849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.111 [2024-11-06 13:53:36.349577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.111 [2024-11-06 13:53:36.350215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.111 [2024-11-06 13:53:36.350253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.111 [2024-11-06 13:53:36.350263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.111 [2024-11-06 13:53:36.350501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.350723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.350732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.350739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.350757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.363488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.364182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.364220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.364231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.364469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.364691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.364699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.364707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.364715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.377458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.378126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.378164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.378175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.378412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.378634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.378643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.378650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.378658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.391391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.392059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.392098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.392109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.392346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.392568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.392577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.392584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.392593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.405322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.405926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.405964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.405979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.406217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.406439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.406447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.406455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.406463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.419188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.419817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.419855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.419866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.420103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.420326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.420334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.420342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.420350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.433079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.433774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.433812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.433823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.434061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.434283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.434292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.434299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.434307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.447048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.447716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.447760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.447772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.448010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.448238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.448246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.448254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.448262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 5318.20 IOPS, 20.77 MiB/s [2024-11-06T12:53:36.488Z] [2024-11-06 13:53:36.462480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.463123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.463162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.463172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.463410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.463632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.463641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.463648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.463656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.112 [2024-11-06 13:53:36.476386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.112 [2024-11-06 13:53:36.477017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.112 [2024-11-06 13:53:36.477055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.112 [2024-11-06 13:53:36.477065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.112 [2024-11-06 13:53:36.477303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.112 [2024-11-06 13:53:36.477525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.112 [2024-11-06 13:53:36.477533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.112 [2024-11-06 13:53:36.477541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.112 [2024-11-06 13:53:36.477549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.374 [2024-11-06 13:53:36.490320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.374 [2024-11-06 13:53:36.490994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.374 [2024-11-06 13:53:36.491032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.374 [2024-11-06 13:53:36.491043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.374 [2024-11-06 13:53:36.491280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.374 [2024-11-06 13:53:36.491503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.374 [2024-11-06 13:53:36.491511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.374 [2024-11-06 13:53:36.491528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.374 [2024-11-06 13:53:36.491536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.374 [2024-11-06 13:53:36.504263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.374 [2024-11-06 13:53:36.504856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.374 [2024-11-06 13:53:36.504894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.374 [2024-11-06 13:53:36.504907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.374 [2024-11-06 13:53:36.505149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.374 [2024-11-06 13:53:36.505372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.374 [2024-11-06 13:53:36.505380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.505388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.505396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.518140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.518739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.518784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.518795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.519033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.519255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.519264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.519271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.519279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.532008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.532667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.532678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.532926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.533150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.533158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.533166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.533174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.545908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.546385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.546405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.546413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.546631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.546858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.546867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.546874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.546881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.559806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.560334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.560350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.560358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.560576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.560800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.560810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.560817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.560824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.573743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.574380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.574418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.574428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.574667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.574899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.574909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.574916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.574924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.587657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.588246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.588266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.588278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.588498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.588717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.588724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.588731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.588738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.601456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.602097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.602135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.602145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.602383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.602605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.602615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.602623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.602631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.615361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.616044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.616082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.616093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.616330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.616553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.616561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.616569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.375 [2024-11-06 13:53:36.616577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.375 [2024-11-06 13:53:36.629306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.375 [2024-11-06 13:53:36.629982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.375 [2024-11-06 13:53:36.630020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.375 [2024-11-06 13:53:36.630031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.375 [2024-11-06 13:53:36.630269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.375 [2024-11-06 13:53:36.630496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.375 [2024-11-06 13:53:36.630505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.375 [2024-11-06 13:53:36.630512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.630521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.643262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.643862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.643900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.643911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.644149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.644371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.644380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.644387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.644395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.657329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.658033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.658071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.658082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.658320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.658543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.658551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.658559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.658568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.671309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.671865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.671904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.671914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.672152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.672374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.672382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.672390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.672403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.685143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.685828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.685866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.685879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.686118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.686340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.686348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.686356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.686364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.699109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.699762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.699800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.699812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.700054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.700276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.700284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.700292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.700300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.713038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.713713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.713758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.713772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.714011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.714233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.714242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.714249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.714258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.726986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.727529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.727548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.727556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.727781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.728001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.728010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.728017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.728024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.376 [2024-11-06 13:53:36.740951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.376 [2024-11-06 13:53:36.741475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.376 [2024-11-06 13:53:36.741491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.376 [2024-11-06 13:53:36.741499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.376 [2024-11-06 13:53:36.741717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.376 [2024-11-06 13:53:36.741942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.376 [2024-11-06 13:53:36.741952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.376 [2024-11-06 13:53:36.741959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.376 [2024-11-06 13:53:36.741966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.638 [2024-11-06 13:53:36.754940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.638 [2024-11-06 13:53:36.755610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.638 [2024-11-06 13:53:36.755648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.638 [2024-11-06 13:53:36.755659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.638 [2024-11-06 13:53:36.755909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.638 [2024-11-06 13:53:36.756133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.638 [2024-11-06 13:53:36.756141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.638 [2024-11-06 13:53:36.756149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.638 [2024-11-06 13:53:36.756157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.638 [2024-11-06 13:53:36.768911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.638 [2024-11-06 13:53:36.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.638 [2024-11-06 13:53:36.769601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.638 [2024-11-06 13:53:36.769617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.638 [2024-11-06 13:53:36.769866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.638 [2024-11-06 13:53:36.770090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.770098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.770106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.770114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.782856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.783531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.783569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.783580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.783825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.784049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.784057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.784065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.784073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.796802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.797354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.797373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.797381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.797600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.797827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.797836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.797843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.797850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.810770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.811300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.811316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.811324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.811542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.811766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.811779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.811786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.811793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.824719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.825160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.825176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.825184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.825402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.825619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.825627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.825634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.825641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.838575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.839115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.839131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.839138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.839356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.839574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.839582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.839589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.839595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.852514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.853059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.853075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.853083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.853301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.853519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.853527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.853534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.853544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.866467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.867104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.867142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.867152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.867390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.867613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.867621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.867629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.867637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.880381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.881054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.881092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.881103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.881341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.881564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.881572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.881580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.881588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.894315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.894928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.639 [2024-11-06 13:53:36.894966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.639 [2024-11-06 13:53:36.894977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.639 [2024-11-06 13:53:36.895214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.639 [2024-11-06 13:53:36.895437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.639 [2024-11-06 13:53:36.895445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.639 [2024-11-06 13:53:36.895453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.639 [2024-11-06 13:53:36.895461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.639 [2024-11-06 13:53:36.908191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.639 [2024-11-06 13:53:36.908836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.908874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.908885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.909122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.909345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.909354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.909361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.909369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 [2024-11-06 13:53:36.922096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.922789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.922828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.922840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.923081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.923303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.923311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.923319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.923327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 [2024-11-06 13:53:36.936077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.936642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.936661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.936669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.936894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.937113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.937121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.937129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.937135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 [2024-11-06 13:53:36.949862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.950387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.950403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.950410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.950633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.950859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.950868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.950875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.950882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 [2024-11-06 13:53:36.963799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.964367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.964383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.964390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.964609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.964833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.964841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.964848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.964855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 [2024-11-06 13:53:36.977772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.978345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.978360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.978368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.978586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.978811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.978820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.978827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:36.978833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 823420 Killed "${NVMF_APP[@]}" "$@" 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.640 [2024-11-06 13:53:36.991553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:36.992162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:36.992205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:36.992216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:36.992454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:36.992676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:36.992685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:36.992693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:13.640 [2024-11-06 13:53:36.992701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=825079 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 825079 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 825079 ']' 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.640 13:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.640 [2024-11-06 13:53:37.005479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.640 [2024-11-06 13:53:37.006012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.640 [2024-11-06 13:53:37.006032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.640 [2024-11-06 13:53:37.006040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.640 [2024-11-06 13:53:37.006260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.640 [2024-11-06 13:53:37.006480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.640 [2024-11-06 13:53:37.006489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.640 [2024-11-06 13:53:37.006497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.640 [2024-11-06 13:53:37.006504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.903 [2024-11-06 13:53:37.019465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.903 [2024-11-06 13:53:37.020105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.903 [2024-11-06 13:53:37.020143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.903 [2024-11-06 13:53:37.020154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.903 [2024-11-06 13:53:37.020392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.903 [2024-11-06 13:53:37.020620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.903 [2024-11-06 13:53:37.020629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.903 [2024-11-06 13:53:37.020636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.903 [2024-11-06 13:53:37.020644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.903 [2024-11-06 13:53:37.033405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.903 [2024-11-06 13:53:37.033965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.903 [2024-11-06 13:53:37.034002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.903 [2024-11-06 13:53:37.034014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.903 [2024-11-06 13:53:37.034253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.903 [2024-11-06 13:53:37.034476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.903 [2024-11-06 13:53:37.034485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.903 [2024-11-06 13:53:37.034492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.903 [2024-11-06 13:53:37.034500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.903 [2024-11-06 13:53:37.047242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.903 [2024-11-06 13:53:37.047848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.903 [2024-11-06 13:53:37.047887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.903 [2024-11-06 13:53:37.047900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.903 [2024-11-06 13:53:37.048139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.903 [2024-11-06 13:53:37.048361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.903 [2024-11-06 13:53:37.048370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.903 [2024-11-06 13:53:37.048377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.903 [2024-11-06 13:53:37.048385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.903 [2024-11-06 13:53:37.049985] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:29:13.903 [2024-11-06 13:53:37.050039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.903 [2024-11-06 13:53:37.061119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.903 [2024-11-06 13:53:37.061743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.903 [2024-11-06 13:53:37.061787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.903 [2024-11-06 13:53:37.061798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.903 [2024-11-06 13:53:37.062036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.903 [2024-11-06 13:53:37.062263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.903 [2024-11-06 13:53:37.062272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.903 [2024-11-06 13:53:37.062280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.903 [2024-11-06 13:53:37.062289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.903 [2024-11-06 13:53:37.075014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.903 [2024-11-06 13:53:37.075558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.903 [2024-11-06 13:53:37.075577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:13.903 [2024-11-06 13:53:37.075586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:13.903 [2024-11-06 13:53:37.075845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:13.903 [2024-11-06 13:53:37.076068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.903 [2024-11-06 13:53:37.076078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.903 [2024-11-06 13:53:37.076086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.903 [2024-11-06 13:53:37.076093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.903 [2024-11-06 13:53:37.088914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.903 [2024-11-06 13:53:37.089469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.903 [2024-11-06 13:53:37.089506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.903 [2024-11-06 13:53:37.089518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.903 [2024-11-06 13:53:37.089764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.903 [2024-11-06 13:53:37.089988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.903 [2024-11-06 13:53:37.089997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.903 [2024-11-06 13:53:37.090005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.903 [2024-11-06 13:53:37.090013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.903 [2024-11-06 13:53:37.102751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.903 [2024-11-06 13:53:37.103391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.903 [2024-11-06 13:53:37.103429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.903 [2024-11-06 13:53:37.103440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.903 [2024-11-06 13:53:37.103678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.903 [2024-11-06 13:53:37.103910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.903 [2024-11-06 13:53:37.103919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.903 [2024-11-06 13:53:37.103932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.903 [2024-11-06 13:53:37.103940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.903 [2024-11-06 13:53:37.116677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.903 [2024-11-06 13:53:37.117358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.903 [2024-11-06 13:53:37.117396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.903 [2024-11-06 13:53:37.117407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.903 [2024-11-06 13:53:37.117645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.903 [2024-11-06 13:53:37.117877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.903 [2024-11-06 13:53:37.117887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.903 [2024-11-06 13:53:37.117895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.903 [2024-11-06 13:53:37.117903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.903 [2024-11-06 13:53:37.130631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.903 [2024-11-06 13:53:37.131039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.903 [2024-11-06 13:53:37.131060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.903 [2024-11-06 13:53:37.131068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.903 [2024-11-06 13:53:37.131287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.903 [2024-11-06 13:53:37.131506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.903 [2024-11-06 13:53:37.131515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.903 [2024-11-06 13:53:37.131522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.903 [2024-11-06 13:53:37.131529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.903 [2024-11-06 13:53:37.140553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:13.903 [2024-11-06 13:53:37.144474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.903 [2024-11-06 13:53:37.145156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.145195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.145206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.145444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.145668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.145677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.145685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.145695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.158449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.159107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.159146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.159157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.159396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.159618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.159627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.159635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.159644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.169780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:13.904 [2024-11-06 13:53:37.169804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:13.904 [2024-11-06 13:53:37.169811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:13.904 [2024-11-06 13:53:37.169816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:13.904 [2024-11-06 13:53:37.169822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:13.904 [2024-11-06 13:53:37.170994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:13.904 [2024-11-06 13:53:37.171148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:13.904 [2024-11-06 13:53:37.171150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:13.904 [2024-11-06 13:53:37.172387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.173056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.173095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.173106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.173344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.173567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.173575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.173583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.173591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.186370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.187085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.187124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.187137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.187380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.187609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.187618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.187625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.187634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.200180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.200713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.200759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.200772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.201014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.201237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.201245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.201253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.201261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.214009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.214579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.214599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.214607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.214832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.215052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.215060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.215068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.215075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.227795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.228347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.228363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.228371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.228590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.228814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.228823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.228836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.228844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.241575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.242178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.242216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.242228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.242466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.242689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.904 [2024-11-06 13:53:37.242697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.904 [2024-11-06 13:53:37.242705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.904 [2024-11-06 13:53:37.242713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.904 [2024-11-06 13:53:37.255445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.904 [2024-11-06 13:53:37.255889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.904 [2024-11-06 13:53:37.255910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.904 [2024-11-06 13:53:37.255918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.904 [2024-11-06 13:53:37.256137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.904 [2024-11-06 13:53:37.256356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.905 [2024-11-06 13:53:37.256363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.905 [2024-11-06 13:53:37.256371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.905 [2024-11-06 13:53:37.256378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.905 [2024-11-06 13:53:37.269309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.905 [2024-11-06 13:53:37.269705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.905 [2024-11-06 13:53:37.269722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:13.905 [2024-11-06 13:53:37.269730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:13.905 [2024-11-06 13:53:37.269955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:13.905 [2024-11-06 13:53:37.270174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.905 [2024-11-06 13:53:37.270182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.905 [2024-11-06 13:53:37.270189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.905 [2024-11-06 13:53:37.270196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.166 [2024-11-06 13:53:37.283143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.166 [2024-11-06 13:53:37.283845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.166 [2024-11-06 13:53:37.283883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.166 [2024-11-06 13:53:37.283896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.166 [2024-11-06 13:53:37.284137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.166 [2024-11-06 13:53:37.284360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.166 [2024-11-06 13:53:37.284368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.166 [2024-11-06 13:53:37.284377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.166 [2024-11-06 13:53:37.284385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.166 [2024-11-06 13:53:37.297118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.166 [2024-11-06 13:53:37.297810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.166 [2024-11-06 13:53:37.297848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.166 [2024-11-06 13:53:37.297861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.166 [2024-11-06 13:53:37.298102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.166 [2024-11-06 13:53:37.298325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.166 [2024-11-06 13:53:37.298333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.166 [2024-11-06 13:53:37.298341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.166 [2024-11-06 13:53:37.298349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.166 [2024-11-06 13:53:37.311097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.166 [2024-11-06 13:53:37.311634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.166 [2024-11-06 13:53:37.311673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.166 [2024-11-06 13:53:37.311685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.166 [2024-11-06 13:53:37.311933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.166 [2024-11-06 13:53:37.312157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.166 [2024-11-06 13:53:37.312165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.166 [2024-11-06 13:53:37.312173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.166 [2024-11-06 13:53:37.312181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.166 [2024-11-06 13:53:37.324915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.325602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.325641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.325657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.325904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.326127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.326136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.326144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.326152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.338735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.339296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.339333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.339346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.339585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.339815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.339825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.339833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.339842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.352569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.353114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.353152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.353165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.353404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.353627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.353635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.353643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.353651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.366381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.366983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.367003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.367011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.367230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.367454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.367462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.367469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.367476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.380201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.380841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.380880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.380892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.381133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.381367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.381377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.381384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.381392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.394126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.167 [2024-11-06 13:53:37.394694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.167 [2024-11-06 13:53:37.394733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420
00:29:14.167 [2024-11-06 13:53:37.394752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set
00:29:14.167 [2024-11-06 13:53:37.394992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor
00:29:14.167 [2024-11-06 13:53:37.395215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.167 [2024-11-06 13:53:37.395224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.167 [2024-11-06 13:53:37.395231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.167 [2024-11-06 13:53:37.395239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.167 [2024-11-06 13:53:37.407971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.167 [2024-11-06 13:53:37.408522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-06 13:53:37.408542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.167 [2024-11-06 13:53:37.408550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.167 [2024-11-06 13:53:37.408777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.167 [2024-11-06 13:53:37.408998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.167 [2024-11-06 13:53:37.409007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.167 [2024-11-06 13:53:37.409019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.167 [2024-11-06 13:53:37.409027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.167 [2024-11-06 13:53:37.421957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.167 [2024-11-06 13:53:37.422553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-06 13:53:37.422569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.167 [2024-11-06 13:53:37.422577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.167 [2024-11-06 13:53:37.422799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.167 [2024-11-06 13:53:37.423018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.167 [2024-11-06 13:53:37.423026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.167 [2024-11-06 13:53:37.423033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.167 [2024-11-06 13:53:37.423040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.167 [2024-11-06 13:53:37.435774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.167 [2024-11-06 13:53:37.436329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-06 13:53:37.436345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.167 [2024-11-06 13:53:37.436352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.167 [2024-11-06 13:53:37.436570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.167 [2024-11-06 13:53:37.436793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.167 [2024-11-06 13:53:37.436802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.167 [2024-11-06 13:53:37.436809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.167 [2024-11-06 13:53:37.436816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.167 [2024-11-06 13:53:37.449743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.167 [2024-11-06 13:53:37.450328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.450344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.450352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.450570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.450793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.450802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.450809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.450816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 4431.83 IOPS, 17.31 MiB/s [2024-11-06T12:53:37.544Z] [2024-11-06 13:53:37.465035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.465655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.465693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.465704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.465950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.466174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.466182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.466190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.466198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 [2024-11-06 13:53:37.478931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.479533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.479570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.479581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.479828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.480051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.480060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.480068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.480076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 [2024-11-06 13:53:37.492820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.493474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.493512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.493524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.493768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.493992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.494000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.494008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.494016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 [2024-11-06 13:53:37.506749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.507345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.507383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.507399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.507637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.507869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.507879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.507888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.507896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 [2024-11-06 13:53:37.520620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.521025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.521046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.521054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.521273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.521492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.521500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.521508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.521516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.168 [2024-11-06 13:53:37.534486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.168 [2024-11-06 13:53:37.534785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-06 13:53:37.534804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.168 [2024-11-06 13:53:37.534812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.168 [2024-11-06 13:53:37.535031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.168 [2024-11-06 13:53:37.535249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.168 [2024-11-06 13:53:37.535257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.168 [2024-11-06 13:53:37.535264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.168 [2024-11-06 13:53:37.535271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.430 [2024-11-06 13:53:37.548423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.430 [2024-11-06 13:53:37.549097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.430 [2024-11-06 13:53:37.549136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.430 [2024-11-06 13:53:37.549147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.430 [2024-11-06 13:53:37.549385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.430 [2024-11-06 13:53:37.549613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.430 [2024-11-06 13:53:37.549622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.430 [2024-11-06 13:53:37.549629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.430 [2024-11-06 13:53:37.549637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.430 [2024-11-06 13:53:37.562374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.430 [2024-11-06 13:53:37.562810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.430 [2024-11-06 13:53:37.562829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.430 [2024-11-06 13:53:37.562837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.563057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.563275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.563283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.563290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.563297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.576229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.577001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.577039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.577051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.577291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.577514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.577523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.577530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.577538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.590082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.590635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.590654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.590662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.590888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.591108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.591117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.591129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.591136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.604066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.604597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.604635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.604645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.604894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.605117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.605125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.605133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.605141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.617870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.618428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.618466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.618478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.618716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.618948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.618957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.618965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.618973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.631702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.632368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.632406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.632417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.632656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.632886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.632896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.632904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.632912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.645655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.646259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.646279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.646287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.646506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.646724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.646732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.646739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.646750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.659476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.659996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.660014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.660022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.660241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.660460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.660468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.660475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.660481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.673415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.674092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.674130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.674141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.674379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.674602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.674610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.674618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.674627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.687376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.688037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.688076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.431 [2024-11-06 13:53:37.688093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.431 [2024-11-06 13:53:37.688335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.431 [2024-11-06 13:53:37.688557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.431 [2024-11-06 13:53:37.688566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.431 [2024-11-06 13:53:37.688574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.431 [2024-11-06 13:53:37.688582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.431 [2024-11-06 13:53:37.701335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.431 [2024-11-06 13:53:37.701780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.431 [2024-11-06 13:53:37.701800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.701809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.702028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.702246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.702254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.702261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.702268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.715204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.715743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.715794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.715801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.716020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.716238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.716247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.716254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.716261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.728987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.729655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.729694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.729705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.729953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.730181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.730191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.730199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.730207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.742953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.743443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.743480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.743491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.743729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.743960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.743969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.743977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.743985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.756925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.757432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.757470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.757482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.757720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.757952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.757962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.757970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.757978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.770710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.771278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.771299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.771308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.771527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.771751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.771760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.771772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.771780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.784514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.785081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.785098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.785107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.785326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.785545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.785553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.785560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.785568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.432 [2024-11-06 13:53:37.798295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.432 [2024-11-06 13:53:37.798840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.432 [2024-11-06 13:53:37.798879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.432 [2024-11-06 13:53:37.798891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.432 [2024-11-06 13:53:37.799132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.432 [2024-11-06 13:53:37.799354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.432 [2024-11-06 13:53:37.799363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.432 [2024-11-06 13:53:37.799371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.432 [2024-11-06 13:53:37.799379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 [2024-11-06 13:53:37.812112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.812704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.812724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.812732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.812958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.694 [2024-11-06 13:53:37.813177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.694 [2024-11-06 13:53:37.813184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.694 [2024-11-06 13:53:37.813192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.694 [2024-11-06 13:53:37.813199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 [2024-11-06 13:53:37.825918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.826560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.826609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.826855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.694 [2024-11-06 13:53:37.827079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.694 [2024-11-06 13:53:37.827087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.694 [2024-11-06 13:53:37.827095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.694 [2024-11-06 13:53:37.827103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 [2024-11-06 13:53:37.839852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.840445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.840483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.840494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.840732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.694 [2024-11-06 13:53:37.840963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.694 [2024-11-06 13:53:37.840972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.694 [2024-11-06 13:53:37.840980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.694 [2024-11-06 13:53:37.840988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:14.694 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:14.694 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.694 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:14.694 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.694 [2024-11-06 13:53:37.853787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.854417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.854455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.854466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.854704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.694 [2024-11-06 13:53:37.854936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.694 [2024-11-06 13:53:37.854946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.694 [2024-11-06 13:53:37.854954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.694 [2024-11-06 13:53:37.854962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 [2024-11-06 13:53:37.867696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.868373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.868411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.868422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.868660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.694 [2024-11-06 13:53:37.868891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.694 [2024-11-06 13:53:37.868901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.694 [2024-11-06 13:53:37.868909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.694 [2024-11-06 13:53:37.868917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.694 [2024-11-06 13:53:37.881647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.694 [2024-11-06 13:53:37.882233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.694 [2024-11-06 13:53:37.882272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.694 [2024-11-06 13:53:37.882284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.694 [2024-11-06 13:53:37.882524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.882756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.882765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.882773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.882781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.695 [2024-11-06 13:53:37.894997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.695 [2024-11-06 13:53:37.895520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.896116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.695 [2024-11-06 13:53:37.896136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.695 [2024-11-06 13:53:37.896144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.695 [2024-11-06 13:53:37.896364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.896583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.896590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.896602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.896609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.695 [2024-11-06 13:53:37.909329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.910009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.695 [2024-11-06 13:53:37.910047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.695 [2024-11-06 13:53:37.910059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.695 [2024-11-06 13:53:37.910298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.910521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.910530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.910538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.910546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 [2024-11-06 13:53:37.923280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.924041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.695 [2024-11-06 13:53:37.924080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.695 [2024-11-06 13:53:37.924091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.695 [2024-11-06 13:53:37.924329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.924551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.924560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.924567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.924575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 Malloc0 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.695 [2024-11-06 13:53:37.937108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.937763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.695 [2024-11-06 13:53:37.937801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.695 [2024-11-06 13:53:37.937813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.695 [2024-11-06 13:53:37.938059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.938282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.938291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.938299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.938308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.695 [2024-11-06 13:53:37.951034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.951690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.695 [2024-11-06 13:53:37.951728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd33000 with addr=10.0.0.2, port=4420 00:29:14.695 [2024-11-06 13:53:37.951739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd33000 is same with the state(6) to be set 00:29:14.695 [2024-11-06 13:53:37.951985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33000 (9): Bad file descriptor 00:29:14.695 [2024-11-06 13:53:37.952209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.695 [2024-11-06 13:53:37.952218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.695 [2024-11-06 13:53:37.952225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.695 [2024-11-06 13:53:37.952233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.695 [2024-11-06 13:53:37.959552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.695 13:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 824059 00:29:14.695 [2024-11-06 13:53:37.964961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.695 [2024-11-06 13:53:37.988783] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:16.207 4537.43 IOPS, 17.72 MiB/s [2024-11-06T12:53:40.521Z] 5398.75 IOPS, 21.09 MiB/s [2024-11-06T12:53:41.510Z] 6027.89 IOPS, 23.55 MiB/s [2024-11-06T12:53:42.895Z] 6534.20 IOPS, 25.52 MiB/s [2024-11-06T12:53:43.835Z] 6964.09 IOPS, 27.20 MiB/s [2024-11-06T12:53:44.778Z] 7320.25 IOPS, 28.59 MiB/s [2024-11-06T12:53:45.717Z] 7641.54 IOPS, 29.85 MiB/s [2024-11-06T12:53:46.658Z] 7953.21 IOPS, 31.07 MiB/s [2024-11-06T12:53:46.658Z] 8169.33 IOPS, 31.91 MiB/s 00:29:23.282 Latency(us) 00:29:23.282 [2024-11-06T12:53:46.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.282 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:23.282 Verification LBA range: start 0x0 length 0x4000 00:29:23.282 Nvme1n1 : 15.01 8171.01 31.92 9794.87 0.00 7099.21 788.48 14854.83 00:29:23.282 [2024-11-06T12:53:46.658Z] =================================================================================================================== 00:29:23.282 [2024-11-06T12:53:46.658Z] Total : 8171.01 31.92 9794.87 0.00 7099.21 788.48 14854.83 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.282 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.282 rmmod nvme_tcp 00:29:23.282 rmmod nvme_fabrics 00:29:23.282 rmmod nvme_keyring 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 825079 ']' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 825079 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 825079 ']' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 825079 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 825079 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 825079' 00:29:23.542 killing process with pid 825079 00:29:23.542 13:53:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 825079 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 825079 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.542 13:53:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.085 00:29:26.085 real 0m28.071s 00:29:26.085 user 1m3.537s 00:29:26.085 sys 0m7.364s 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.085 ************************************ 00:29:26.085 END TEST nvmf_bdevperf 00:29:26.085 
************************************ 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:26.085 13:53:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.085 ************************************ 00:29:26.085 START TEST nvmf_target_disconnect 00:29:26.085 ************************************ 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:26.085 * Looking for test storage... 00:29:26.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:26.085 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:26.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.086 --rc genhtml_branch_coverage=1 00:29:26.086 --rc genhtml_function_coverage=1 00:29:26.086 --rc genhtml_legend=1 00:29:26.086 --rc geninfo_all_blocks=1 00:29:26.086 --rc geninfo_unexecuted_blocks=1 
00:29:26.086 00:29:26.086 ' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:26.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.086 --rc genhtml_branch_coverage=1 00:29:26.086 --rc genhtml_function_coverage=1 00:29:26.086 --rc genhtml_legend=1 00:29:26.086 --rc geninfo_all_blocks=1 00:29:26.086 --rc geninfo_unexecuted_blocks=1 00:29:26.086 00:29:26.086 ' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:26.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.086 --rc genhtml_branch_coverage=1 00:29:26.086 --rc genhtml_function_coverage=1 00:29:26.086 --rc genhtml_legend=1 00:29:26.086 --rc geninfo_all_blocks=1 00:29:26.086 --rc geninfo_unexecuted_blocks=1 00:29:26.086 00:29:26.086 ' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:26.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.086 --rc genhtml_branch_coverage=1 00:29:26.086 --rc genhtml_function_coverage=1 00:29:26.086 --rc genhtml_legend=1 00:29:26.086 --rc geninfo_all_blocks=1 00:29:26.086 --rc geninfo_unexecuted_blocks=1 00:29:26.086 00:29:26.086 ' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.086 13:53:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:26.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.086 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.087 13:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.231 
13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:34.231 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:34.231 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:34.231 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:34.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.231 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.232 13:53:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:29:34.232 00:29:34.232 --- 10.0.0.2 ping statistics --- 00:29:34.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.232 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:34.232 00:29:34.232 --- 10.0.0.1 ping statistics --- 00:29:34.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.232 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.232 13:53:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:34.232 ************************************ 00:29:34.232 START TEST nvmf_target_disconnect_tc1 00:29:34.232 ************************************ 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.232 [2024-11-06 13:53:56.866277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.232 [2024-11-06 13:53:56.866353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2176ad0 with 
addr=10.0.0.2, port=4420 00:29:34.232 [2024-11-06 13:53:56.866380] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:34.232 [2024-11-06 13:53:56.866395] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:34.232 [2024-11-06 13:53:56.866403] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:34.232 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:34.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:34.232 Initializing NVMe Controllers 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:34.232 00:29:34.232 real 0m0.126s 00:29:34.232 user 0m0.059s 00:29:34.232 sys 0m0.065s 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.232 ************************************ 00:29:34.232 END TEST nvmf_target_disconnect_tc1 00:29:34.232 ************************************ 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:34.232 13:53:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:34.232 ************************************ 00:29:34.232 START TEST nvmf_target_disconnect_tc2 00:29:34.232 ************************************ 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=831151 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 831151 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 831151 ']' 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:34.232 13:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.233 [2024-11-06 13:53:57.032026] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:29:34.233 [2024-11-06 13:53:57.032089] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.233 [2024-11-06 13:53:57.133220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.233 [2024-11-06 13:53:57.185984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.233 [2024-11-06 13:53:57.186036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.233 [2024-11-06 13:53:57.186045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.233 [2024-11-06 13:53:57.186052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.233 [2024-11-06 13:53:57.186059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:34.233 [2024-11-06 13:53:57.188127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:34.233 [2024-11-06 13:53:57.188288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:34.233 [2024-11-06 13:53:57.188452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.233 [2024-11-06 13:53:57.188452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:34.494 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:34.494 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:34.494 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.494 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:34.494 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 Malloc0 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 [2024-11-06 13:53:57.948665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 [2024-11-06 13:53:57.989099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.755 13:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.755 13:53:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.755 13:53:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=831482 00:29:34.755 13:53:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:34.755 13:53:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.666 13:54:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 831151 00:29:36.666 13:54:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write 
completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Read completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 Write completed with error (sct=0, sc=8) 00:29:36.666 starting I/O failed 00:29:36.666 [2024-11-06 13:54:00.022599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.666 [2024-11-06 13:54:00.023084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.023120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.023323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.023334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 
00:29:36.666 [2024-11-06 13:54:00.023703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.023711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.024148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.024176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.024492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.024501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.024981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.025009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.025340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.025349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 
00:29:36.666 [2024-11-06 13:54:00.025569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.025577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.025992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.026021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.026249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.026258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.026553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.026561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.026901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.026909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 
00:29:36.666 [2024-11-06 13:54:00.027248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.027256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.027607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.027615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.027847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.027855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.028047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.666 [2024-11-06 13:54:00.028055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.666 qpair failed and we were unable to recover it. 00:29:36.666 [2024-11-06 13:54:00.028370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.028378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.028723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.028731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.029167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.029174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.029489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.029496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.029778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.029789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.030108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.030116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.030453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.030460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.030689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.030696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.031023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.031031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.031320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.031327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.031653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.031660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.031924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.031931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.032247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.032254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.032422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.032429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.032763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.032770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.033054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.033061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.033383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.033390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.033701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.033708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.034015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.034023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.034373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.034381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.034496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.034503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.034667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.034675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.034907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.034916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.035298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.035306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.035644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.035650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.035865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.036105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.036112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.036344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.036667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.036674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.036934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.036941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.037162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.037169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.667 [2024-11-06 13:54:00.037525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.037532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.037713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.037721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.038021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.038031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.038316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.038324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 00:29:36.667 [2024-11-06 13:54:00.038727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.667 [2024-11-06 13:54:00.038734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.667 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.072283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.072295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.072637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.072644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.072972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.072980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.073212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.073219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.073592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.073599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.073933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.073941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.074284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.074291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.074596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.074603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.074712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.074719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.075005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.075012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.075303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.075310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.075639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.075645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.075847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.075854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.076195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.076202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.076509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.076516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.076824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.076831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.077137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.077144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.077449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.077456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.077791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.077800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.078147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.078154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.078441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.078448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.078740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.078756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.079079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.079085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.079386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.079394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.079710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.079716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 
00:29:36.945 [2024-11-06 13:54:00.080086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.080093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.080393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.080400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.080732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.080739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.945 [2024-11-06 13:54:00.081045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.945 [2024-11-06 13:54:00.081052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.945 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.081377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.081384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.081595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.081602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.081909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.081916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.082238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.082252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.082587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.082593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.082912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.082920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.083249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.083256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.083543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.083549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.083874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.083881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.084220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.084228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.084520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.084527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.084717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.085059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.085067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.085358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.085365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.085616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.085623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.085934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.085941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.086235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.086242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.086546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.086552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.086867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.087195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.087202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.087586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.087592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.087770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.087777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.088091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.088098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.088410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.088417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.088732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.088739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.089108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.089115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.089446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.089459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.089768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.089775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.090118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.090125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.090484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.090493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.090794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.090814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 
00:29:36.946 [2024-11-06 13:54:00.091105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.091113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.946 [2024-11-06 13:54:00.091401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.946 [2024-11-06 13:54:00.091408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.946 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.091718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.091726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.092030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.092036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.092423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.092429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 
00:29:36.947 [2024-11-06 13:54:00.092794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.092802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.093107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.093114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.093424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.093436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.093734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.093742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.093909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.093916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 
00:29:36.947 [2024-11-06 13:54:00.094142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.094149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.094487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.094494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.094801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.094809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.095109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.095116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.095434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.095441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 
00:29:36.947 [2024-11-06 13:54:00.095761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.095769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.096090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.096097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.096417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.096424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.096757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.096765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 00:29:36.947 [2024-11-06 13:54:00.097103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.947 [2024-11-06 13:54:00.097111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.947 qpair failed and we were unable to recover it. 
00:29:36.950 [2024-11-06 13:54:00.130970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.130978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.131085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.131092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.131190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.131197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.131447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.131454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.131890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.131898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 
00:29:36.950 [2024-11-06 13:54:00.132217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.132224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.132564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.132570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.132864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.132872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.133209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.133215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.133604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.133611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 
00:29:36.950 [2024-11-06 13:54:00.133918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.133925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.134247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.134253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.134543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.134556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.134865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.134872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.135213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.135220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 
00:29:36.950 [2024-11-06 13:54:00.135532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.135539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.135953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.135960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.136276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.136282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.136595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.136602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.136917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.136924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 
00:29:36.950 [2024-11-06 13:54:00.137261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.137267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.137466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.137473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.137801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.950 [2024-11-06 13:54:00.137807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.950 qpair failed and we were unable to recover it. 00:29:36.950 [2024-11-06 13:54:00.138158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.138164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.138468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.138475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.138791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.138797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.139105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.139111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.139440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.139447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.139745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.139758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.140062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.140068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.140391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.140398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.140574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.140580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.140788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.140795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.140984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.140991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.141327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.141335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.141678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.141685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.141861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.141868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.142183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.142190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.142424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.142431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.142789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.142797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.143010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.143018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.143374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.143382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.143599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.143607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.143954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.143962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.144287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.144295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.144465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.144474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.144799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.144806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.145138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.145146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.145455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.145462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.145647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.145654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.145834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.145843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.146014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.146023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.146306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.146313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.146624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.146632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 00:29:36.951 [2024-11-06 13:54:00.146956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.951 [2024-11-06 13:54:00.146963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.951 qpair failed and we were unable to recover it. 
00:29:36.951 [2024-11-06 13:54:00.147282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.147289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.147581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.147588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.147911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.147919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.148121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.148128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.148539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.148546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 
00:29:36.952 [2024-11-06 13:54:00.148860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.148868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.149177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.149184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.149371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.149379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.149704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.149712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.149886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.149893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 
00:29:36.952 [2024-11-06 13:54:00.150210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.150217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.150534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.150541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.150853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.150860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.151200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.151209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.151381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.151389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 
00:29:36.952 [2024-11-06 13:54:00.151711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.151719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.152022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.152030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.152307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.152314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.152631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.152638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 00:29:36.952 [2024-11-06 13:54:00.152928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.152936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it. 
00:29:36.952 [2024-11-06 13:54:00.153339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.952 [2024-11-06 13:54:00.153346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.952 qpair failed and we were unable to recover it.
[The identical error sequence — connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fbb84000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously in this log from 13:54:00.153339 through 13:54:00.187017; the duplicate entries are collapsed here.]
00:29:36.955 [2024-11-06 13:54:00.187308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.187316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.187635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.187642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.187959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.187967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.188272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.188278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.188600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.188607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 
00:29:36.955 [2024-11-06 13:54:00.188921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.188928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.189186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.189193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.189525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.189532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.189732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.189739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.190072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.190079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 
00:29:36.955 [2024-11-06 13:54:00.190394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.190400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.190717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.190725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.191080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.191088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.191378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.191385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.191716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.191724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 
00:29:36.955 [2024-11-06 13:54:00.192036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.955 [2024-11-06 13:54:00.192043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.955 qpair failed and we were unable to recover it. 00:29:36.955 [2024-11-06 13:54:00.192353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.192360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.192678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.192686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.192920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.193237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.193245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.193532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.193539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.193828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.193835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.194159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.194166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.194475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.194482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.194796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.194810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.195128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.195137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.195446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.195453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.195880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.195887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.196200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.196207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.196399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.196406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.196613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.196620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.196975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.196981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.197181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.197188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.197545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.197552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.197903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.197911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.198216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.198224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.198523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.198530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.198717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.198724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.199056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.199063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.199451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.199457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.199841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.199848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.200173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.200180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.200392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.200407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.200709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.200716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.201086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.201094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 
00:29:36.956 [2024-11-06 13:54:00.201423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.201743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.201754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.202062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.202069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.202398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.956 [2024-11-06 13:54:00.202405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.956 qpair failed and we were unable to recover it. 00:29:36.956 [2024-11-06 13:54:00.202605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.202611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.202892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.202900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.203212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.203218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.203585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.203592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.203759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.203767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.204125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.204132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.204480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.204487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.204766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.204773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.205082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.205088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.205407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.205414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.205604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.205612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.205929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.205936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.206170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.206177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.206552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.206559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.206858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.206866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.207182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.207188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.207384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.207392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.207617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.207624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.207931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.207938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.208153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.208159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.208473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.208479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.208801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.208808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.209177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.209184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.209485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.209492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.209812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.209819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.210234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.210242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.210567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.210574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.210897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.210904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.211234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.211242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.211549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.211555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.211861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.211868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.212197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.212204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.212515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.212522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.212827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.212834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.213215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.213222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.213480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.213486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 
00:29:36.957 [2024-11-06 13:54:00.213796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.957 [2024-11-06 13:54:00.213803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.957 qpair failed and we were unable to recover it. 00:29:36.957 [2024-11-06 13:54:00.214157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.214163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.214452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.214459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.214639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.214646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.214964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.214971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.215175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.215188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.215545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.215551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.215866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.215873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.216208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.216215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.216531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.216537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.216864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.216871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.217182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.217189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.217464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.217471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.217650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.217656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.218034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.218041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.218325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.218332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.218521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.218528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.218844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.218851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.219020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.219026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.219308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.219315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.219485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.219494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.219793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.219800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.220107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.220113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.220413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.220419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.220726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.220733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.221041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.221048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.221347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.221353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.221544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.221550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.221859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.221866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.222198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.222205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.222369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.222676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.222683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.223117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.223125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.223307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.223314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.223606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.223613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 
00:29:36.958 [2024-11-06 13:54:00.223940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.223947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.224145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.224152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.224451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.224458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.224789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.958 [2024-11-06 13:54:00.224796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.958 qpair failed and we were unable to recover it. 00:29:36.958 [2024-11-06 13:54:00.225081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.225088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.225412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.225418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.225616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.225623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.225855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.225862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.226167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.226174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.226370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.226377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.226673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.226679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.226730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.226736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.226883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.226891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.227144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.227150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.227425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.227433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.227716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.227722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.227911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.227918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.228118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.228124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.228424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.228431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.228745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.228756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.228955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.228962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.229285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.229292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.229603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.229610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.229844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.229851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.230144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.230151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.230511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.230519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.230871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.230878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.231202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.231209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.231596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.231602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.231917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.231924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.232153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.232160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.232327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.232334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.232538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.232544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.232906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.232913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.233293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.233300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.233593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.233600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.233916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.233923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.234092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.234099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.234329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.234335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.234640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.234647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 
00:29:36.959 [2024-11-06 13:54:00.234861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.234868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.959 qpair failed and we were unable to recover it. 00:29:36.959 [2024-11-06 13:54:00.235035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.959 [2024-11-06 13:54:00.235041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.235233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.235240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.235425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.235432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.235726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.235732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 
00:29:36.960 [2024-11-06 13:54:00.236043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.236049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.236451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.236458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.236723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.236730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.237116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.237123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 00:29:36.960 [2024-11-06 13:54:00.237429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.960 [2024-11-06 13:54:00.237435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.960 qpair failed and we were unable to recover it. 
00:29:36.960 [2024-11-06 13:54:00.237751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.960 [2024-11-06 13:54:00.237758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:36.960 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." records repeated for every retry, timestamps advancing from 13:54:00.238054 through 13:54:00.272279, all for tqpair=0x7fbb84000b90, addr=10.0.0.2, port=4420 ...]
00:29:36.963 [2024-11-06 13:54:00.272377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.963 [2024-11-06 13:54:00.272384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:36.963 qpair failed and we were unable to recover it.
00:29:36.963 [2024-11-06 13:54:00.272705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.272712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.273013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.273019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.273321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.273328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.273553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.273560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.273882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.273889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 
00:29:36.963 [2024-11-06 13:54:00.274197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.274205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.274514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.274520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.274814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.274821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.275042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.275050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.275360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.275367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 
00:29:36.963 [2024-11-06 13:54:00.275703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.275710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.276013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.276021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.276322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.276328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.276652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.276659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.276955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.276962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 
00:29:36.963 [2024-11-06 13:54:00.277277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.277284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.277597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.277604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.277911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.277918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.278192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.278198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.278329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.278336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 
00:29:36.963 [2024-11-06 13:54:00.278631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.963 [2024-11-06 13:54:00.278639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.963 qpair failed and we were unable to recover it. 00:29:36.963 [2024-11-06 13:54:00.278955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.278962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.279179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.279186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.279466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.279473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.279812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.279820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.280146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.280153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.280360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.280367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.280701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.280708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.281025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.281032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.281351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.281358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.281548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.281555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.281897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.281904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.282260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.282267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.282576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.282583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.282882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.282889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.283184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.283190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.283484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.283491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.283791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.283798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.284144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.284152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.284449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.284455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.284789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.284797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.285119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.285126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.285418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.285425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.285614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.285621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.285885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.285893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.286199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.286206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.286528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.286534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.286811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.286819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.287029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.287036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.287398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.287404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.287723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.287729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.288031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.288038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.288374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.288382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.288594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.288602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.288895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.288901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 
00:29:36.964 [2024-11-06 13:54:00.289218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.289224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.289511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.289518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.289811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.289818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.964 qpair failed and we were unable to recover it. 00:29:36.964 [2024-11-06 13:54:00.290194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.964 [2024-11-06 13:54:00.290200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.290513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.290519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 
00:29:36.965 [2024-11-06 13:54:00.290861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.290869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.291183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.291190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.291495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.291501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.291825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.291832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.292138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.292144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 
00:29:36.965 [2024-11-06 13:54:00.292454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.292460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.292773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.292781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.293104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.293110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.293399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.293406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.293705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.293712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 
00:29:36.965 [2024-11-06 13:54:00.294021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.294029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.294337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.294344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.294635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.294642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.294938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.294945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.295244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.295251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 
00:29:36.965 [2024-11-06 13:54:00.295582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.295589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.295898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.295905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.296127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.296134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.296431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.296438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 00:29:36.965 [2024-11-06 13:54:00.296765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.965 [2024-11-06 13:54:00.296772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:36.965 qpair failed and we were unable to recover it. 
00:29:37.244 [2024-11-06 13:54:00.328259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.328265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.328590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.328597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.328909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.328916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.329230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.329237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.329540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.329547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 
00:29:37.244 [2024-11-06 13:54:00.329877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.329884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.330055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.330063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.330349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.330356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.330649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.330656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.330944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.330951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 
00:29:37.244 [2024-11-06 13:54:00.331268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.331274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.331583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.331590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.331897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.331905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.332260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.332267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 00:29:37.244 [2024-11-06 13:54:00.332576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.244 [2024-11-06 13:54:00.332584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.244 qpair failed and we were unable to recover it. 
00:29:37.244 [2024-11-06 13:54:00.332887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.332894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.333194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.333201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.333506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.333514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.333804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.333811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.334145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.334152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.334435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.334442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.334735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.334742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.334949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.334955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.335274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.335281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.335598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.335605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.335898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.335905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.336225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.336232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.336525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.336532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.336837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.336845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.337154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.337160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.337465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.337472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.337802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.337809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.338115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.338122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.338428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.338435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.338744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.338755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.339032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.339038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.339352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.339359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.339666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.339672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.339984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.339991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.340299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.340306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.340691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.340698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.341011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.341018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.341318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.341326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.341647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.341655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.341963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.341971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.342279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.342287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.342635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.342642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.342959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.342972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.343303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.343310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.343617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.343624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 
00:29:37.245 [2024-11-06 13:54:00.343933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.343940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.344226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.344233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.344551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.344558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.245 qpair failed and we were unable to recover it. 00:29:37.245 [2024-11-06 13:54:00.344857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.245 [2024-11-06 13:54:00.344864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.345164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.345172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 
00:29:37.246 [2024-11-06 13:54:00.345460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.345468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.345773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.345781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.346097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.346105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.346416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.346422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.346539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.346546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 
00:29:37.246 [2024-11-06 13:54:00.346835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.346841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.347165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.347172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.347477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.347484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.347777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.347785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.348100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.348106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 
00:29:37.246 [2024-11-06 13:54:00.348418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.348425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.348722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.348729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.348940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.348947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.349276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.349283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.349566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.349573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 
00:29:37.246 [2024-11-06 13:54:00.349874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.349881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.350190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.350197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.350504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.350511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.350800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.350807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 00:29:37.246 [2024-11-06 13:54:00.351030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.246 [2024-11-06 13:54:00.351037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.246 qpair failed and we were unable to recover it. 
00:29:37.246 [2024-11-06 13:54:00.351360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.246 [2024-11-06 13:54:00.351367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.246 qpair failed and we were unable to recover it.
00:29:37.249 [2024-11-06 13:54:00.387288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.387295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.387601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.387608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.387917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.387925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.388254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.388261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.388573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.388579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 
00:29:37.249 [2024-11-06 13:54:00.388758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.388764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.389065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.389072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.389383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.389389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.389582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.389589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.389795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.389802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 
00:29:37.249 [2024-11-06 13:54:00.390075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.249 [2024-11-06 13:54:00.390081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.249 qpair failed and we were unable to recover it. 00:29:37.249 [2024-11-06 13:54:00.390403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.390409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.390750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.390757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.391057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.391063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.391384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.391391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.391715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.391722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.392019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.392027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.392316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.392323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.392628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.392635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.393012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.393020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.393324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.393331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.393642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.393649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.393920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.393927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.394129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.394136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.394446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.394453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.394736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.394743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.395052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.395059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.395326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.395335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.395643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.395650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.395968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.395976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.396293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.396300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.396604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.396611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.396891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.396898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.397226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.397233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.397544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.397552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.397764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.397771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.398091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.398097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.398387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.398394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.398716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.398723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.399066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.399073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.399385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.399391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.399775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.399783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.400100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.400107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.400392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.400399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.400710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.400717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.250 [2024-11-06 13:54:00.401029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.401036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.401352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.401358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.401657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.401664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.401963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.401969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 00:29:37.250 [2024-11-06 13:54:00.402260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.250 [2024-11-06 13:54:00.402266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.250 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.402551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.402558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.402880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.402887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.403203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.403210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.403543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.403550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.403856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.403863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.404186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.404192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.404502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.404509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.404827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.404835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.405154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.405161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.405474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.405480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.405786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.405793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.406129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.406136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.406438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.406444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.406818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.406826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.407140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.407146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.407438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.407445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.407764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.407771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.408075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.408084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.408405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.408411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.408715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.408721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.409001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.409008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.409216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.409223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.409418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.409424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.409729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.409736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 00:29:37.251 [2024-11-06 13:54:00.410028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.251 [2024-11-06 13:54:00.410035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.251 qpair failed and we were unable to recover it. 
00:29:37.251 [2024-11-06 13:54:00.410354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.251 [2024-11-06 13:54:00.410361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.251 qpair failed and we were unable to recover it.
[... the identical error triplet above (connect() failed, errno = 111 / sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats approximately 115 more times between 13:54:00.410 and 13:54:00.443; repeats elided ...]
00:29:37.254 [2024-11-06 13:54:00.443767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.443774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.444071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.444078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.444393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.444400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.444702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.444709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.445034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.445042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 
00:29:37.254 [2024-11-06 13:54:00.445351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.445358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.445639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.254 [2024-11-06 13:54:00.445646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-11-06 13:54:00.445931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.445938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.446232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.446240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.446569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.446575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.446769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.446776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.447089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.447096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.447399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.447406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.447719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.447725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.448090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.448097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.448400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.448407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.448695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.448701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.449003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.449016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.449327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.449334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.449640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.449922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.449929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.450227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.450242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.450545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.450552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.450793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.450800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.451123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.451130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.451442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.451449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.451777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.451785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.452116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.452124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.452430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.452437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.452750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.452757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.453055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.453061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.453369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.453376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.453697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.453703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.454014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.454021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.454337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.454343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.454636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.454643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.454956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.454963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.455260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.455269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.455592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.455599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.455996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.456003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 
00:29:37.255 [2024-11-06 13:54:00.456299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.456305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.255 [2024-11-06 13:54:00.456612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.255 [2024-11-06 13:54:00.456619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.255 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.456911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.456918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.457237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.457244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.457533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.457541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.457850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.457857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.458151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.458158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.458477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.458483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.458779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.458786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.459078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.459084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.459308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.459315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.459592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.459599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.459917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.459924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.460251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.460257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.460564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.460571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.460878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.460885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.461253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.461261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.461570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.461577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.461891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.461899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.462215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.462221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.462413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.462420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.462689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.462696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.463014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.463021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.463331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.463338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.463632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.463639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.463931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.463938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.464225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.464239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.464538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.464545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.464741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.464763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.465100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.465107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.465418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.465425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.465735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.465741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.466056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.466063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.466250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.466257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.466462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.466470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.256 [2024-11-06 13:54:00.466790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.466798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.467090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.467097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.467402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.467714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.467721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 00:29:37.256 [2024-11-06 13:54:00.468008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.256 [2024-11-06 13:54:00.468015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.256 qpair failed and we were unable to recover it. 
00:29:37.259 [2024-11-06 13:54:00.500587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.259 [2024-11-06 13:54:00.500593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.259 qpair failed and we were unable to recover it. 00:29:37.259 [2024-11-06 13:54:00.500905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.259 [2024-11-06 13:54:00.500912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.259 qpair failed and we were unable to recover it. 00:29:37.259 [2024-11-06 13:54:00.501121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.259 [2024-11-06 13:54:00.501127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.259 qpair failed and we were unable to recover it. 00:29:37.259 [2024-11-06 13:54:00.501435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.259 [2024-11-06 13:54:00.501442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.259 qpair failed and we were unable to recover it. 00:29:37.259 [2024-11-06 13:54:00.501769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.259 [2024-11-06 13:54:00.501776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.259 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.502062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.502070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.502372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.502379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.502661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.502668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.502983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.502990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.503275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.503282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.503580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.503586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.503791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.503798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.504008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.504014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.504355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.504361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.504661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.504668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.504977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.504985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.505291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.505299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.505585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.505592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.505877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.505885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.506195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.506203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.506510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.506518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.506814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.506826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.507160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.507167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.507480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.507487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.507771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.507779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.508151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.508158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.508452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.508459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.508773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.508780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.509132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.509139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.509445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.509452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.509755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.509762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.509971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.509978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.510250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.510257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.510578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.510584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.510949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.510959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 
00:29:37.260 [2024-11-06 13:54:00.511175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.511183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.511451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.511458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.511782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.511789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.512092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.260 [2024-11-06 13:54:00.512107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.260 qpair failed and we were unable to recover it. 00:29:37.260 [2024-11-06 13:54:00.512407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.512414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.512720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.512728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.513046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.513053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.513366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.513373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.513694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.513701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.514007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.514016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.514334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.514343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.514629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.514637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.514920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.514929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.515244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.515251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.515558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.515564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.515861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.515868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.516155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.516162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.516486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.516493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.516797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.516804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.517106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.517120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.517427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.517723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.517730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.518040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.518047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.518354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.518361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.518673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.518680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.518994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.519009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.519292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.519300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.519403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.519411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.519683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.519690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.520029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.520037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.520335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.520630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.520636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.520941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.520948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.521259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.521265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.521544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.521551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.521860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.521867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.522155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.522162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.522488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.522495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.522776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.522783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 00:29:37.261 [2024-11-06 13:54:00.523125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.523134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
00:29:37.261 [2024-11-06 13:54:00.523345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.261 [2024-11-06 13:54:00.523352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.261 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / qpair failed message pairs for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420) repeated for subsequent retries from 13:54:00.523 through 13:54:00.556; duplicates elided]
00:29:37.265 [2024-11-06 13:54:00.557244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.557457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.557465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.557659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.557666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.557840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.557848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.558144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.558153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.558498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.558506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.558778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.558785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.559094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.559102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.559829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.559849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.560074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.560082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.560238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.560246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.560537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.560544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.560810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.560818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.561139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.561146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.561416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.561423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.561725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.561733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.562099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.562107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.562306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.562312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.562619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.562626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.562995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.563003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.563283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.563290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.563604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.563611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.563816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.563823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.564201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.564208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.564404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.564412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.564628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.564635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.564921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.564929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.565237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.565254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.565577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.565584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.565942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.565950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.566291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.566298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.566632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.566640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.566963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.566970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.567153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.567161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.567543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.567550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 
00:29:37.265 [2024-11-06 13:54:00.567812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.567821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.265 [2024-11-06 13:54:00.568013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.265 [2024-11-06 13:54:00.568020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.265 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.568348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.568355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.568672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.568680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.569049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.569057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.569343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.569351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.569578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.569586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.569927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.569935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.570236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.570243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.570408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.570417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.570727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.570734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.571033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.571040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.571353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.571362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.571719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.571726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.572105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.572112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.572328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.572335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.572607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.572614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.572805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.572813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.573010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.573017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.573303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.573310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.573641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.573648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.574009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.574017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.574334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.574340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.574530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.574537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.574593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.574907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.574921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.575218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.575226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.575416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.575423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.575694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.575701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.576018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.576025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.576377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.576384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.576700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.576707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.577042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.577049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.577353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.577360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.577715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.577722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.578149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.578156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.578473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.578480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.578793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.578800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.579012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.579019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 00:29:37.266 [2024-11-06 13:54:00.579385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.266 [2024-11-06 13:54:00.579392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.266 qpair failed and we were unable to recover it. 
00:29:37.266 [2024-11-06 13:54:00.579704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.267 [2024-11-06 13:54:00.579711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.267 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" triplet repeats for every reconnect attempt (errno = 111, tqpair=0x7fbb84000b90, addr=10.0.0.2, port=4420) from 13:54:00.580091 through 13:54:00.614588 ...]
00:29:37.546 [2024-11-06 13:54:00.614878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.546 [2024-11-06 13:54:00.614885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.546 qpair failed and we were unable to recover it.
00:29:37.546 [2024-11-06 13:54:00.615100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.615108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.615393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.615399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.615612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.615621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.615948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.615956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.616271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.616278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 
00:29:37.546 [2024-11-06 13:54:00.616627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.616634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.616960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.616967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.617302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.617309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.617506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.617514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.617816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.617823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 
00:29:37.546 [2024-11-06 13:54:00.618116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.618122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.618439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.618445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.618736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.618743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.619055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.619062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.619266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.619274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 
00:29:37.546 [2024-11-06 13:54:00.619593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.619599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.619916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.619923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.620257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.620264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.620645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.620651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.620944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.620952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 
00:29:37.546 [2024-11-06 13:54:00.621271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.621279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.621572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.621580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.621738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.621748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.622045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.622053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.622352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.622359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 
00:29:37.546 [2024-11-06 13:54:00.622564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.622572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.622773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.622788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.622934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.622940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.623239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.546 [2024-11-06 13:54:00.623246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.546 qpair failed and we were unable to recover it. 00:29:37.546 [2024-11-06 13:54:00.623582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.623589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.623959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.623968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.624157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.624165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.624457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.624464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.624625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.624633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.624914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.624922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.625242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.625249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.625529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.625536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.625835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.625842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.626186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.626193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.626509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.626515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.626884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.626891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.627239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.627246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.627548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.627555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.627900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.627908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.628209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.628216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.628434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.628441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.628729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.628736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.629040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.629048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.629364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.629371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.629668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.629675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.629961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.629968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.630299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.630306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.630588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.630595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.630894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.630901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.631135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.631141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.631445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.631453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.631763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.631771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.632075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.632082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.632382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.632389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.632680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.632687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 
00:29:37.547 [2024-11-06 13:54:00.632983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.632990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.633298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.633305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.633609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.633616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.633921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.547 [2024-11-06 13:54:00.633928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.547 qpair failed and we were unable to recover it. 00:29:37.547 [2024-11-06 13:54:00.634229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.634243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 
00:29:37.548 [2024-11-06 13:54:00.634611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.634618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.634936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.634943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.635166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.635173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.635540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.635547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.635842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.635849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 
00:29:37.548 [2024-11-06 13:54:00.636156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.636164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.636474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.636480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.636800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.636807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.637109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.637116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 00:29:37.548 [2024-11-06 13:54:00.637428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.637434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it. 
00:29:37.548 [2024-11-06 13:54:00.637716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.548 [2024-11-06 13:54:00.637722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.548 qpair failed and we were unable to recover it.
00:29:37.551 [2024-11-06 13:54:00.672434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.672442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.672752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.672759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.672846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.672853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.673181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.673189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.673507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.673515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 
00:29:37.551 [2024-11-06 13:54:00.673699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.673707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.673994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.674002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.674181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.674189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.674480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.674487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 00:29:37.551 [2024-11-06 13:54:00.674695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.551 [2024-11-06 13:54:00.674702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.551 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.674921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.674928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.675247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.675257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.675593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.675600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.675787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.675795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.676107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.676115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.676419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.676427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.676625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.676632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.676931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.676939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.677226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.677233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.677537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.677544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.677826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.677835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.678139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.678147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.678486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.678493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.678803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.678811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.679100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.679106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.679426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.679433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.679647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.679654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.679873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.679881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.680204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.680211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.680384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.680390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.680720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.680727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.680924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.680931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.681207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.681214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.681391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.681398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.681623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.681631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.681956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.681963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.682296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.682303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.682593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.682601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.682928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.682935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.683110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.683117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.683424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.683431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.683659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.683666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.683835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.683843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.684158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.684165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.684455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.684469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 
00:29:37.552 [2024-11-06 13:54:00.684670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.684676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.684753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.684761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.552 [2024-11-06 13:54:00.684975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.552 [2024-11-06 13:54:00.684981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.552 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.685285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.685292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.685593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.685599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.685920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.685927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.686215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.686225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.686312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.686606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.686612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.686790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.686798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.687136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.687143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.687223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.687229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.687528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.687535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.687818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.687825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.688133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.688140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.688451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.688458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.688756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.688763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.688949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.688957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.689201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.689208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.689518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.689526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.689807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.689814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.689880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.689888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.690159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.690166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.690388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.690395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.690704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.690711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.690879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.690886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.691137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.691144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.691465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.691471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.691635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.691643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 00:29:37.553 [2024-11-06 13:54:00.691998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.692005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.553 [2024-11-06 13:54:00.692316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.553 [2024-11-06 13:54:00.692323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.553 qpair failed and we were unable to recover it. 
00:29:37.556 [2024-11-06 13:54:00.727376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.556 [2024-11-06 13:54:00.727383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.556 qpair failed and we were unable to recover it. 00:29:37.556 [2024-11-06 13:54:00.727689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.556 [2024-11-06 13:54:00.727695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.556 qpair failed and we were unable to recover it. 00:29:37.556 [2024-11-06 13:54:00.728019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.556 [2024-11-06 13:54:00.728026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.556 qpair failed and we were unable to recover it. 00:29:37.556 [2024-11-06 13:54:00.728312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.556 [2024-11-06 13:54:00.728319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.556 qpair failed and we were unable to recover it. 00:29:37.556 [2024-11-06 13:54:00.728639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.728646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.728962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.728969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.729273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.729279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.729592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.729598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.729928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.729935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.730226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.730233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.730553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.730561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.730871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.730879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.731091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.731098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.731248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.731255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.731459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.731466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.731626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.731634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.731802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.731810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.732018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.732024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.732399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.732407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.732732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.733074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.733081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.733370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.733380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.733674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.733681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.734016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.734023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.734332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.734338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.734509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.734516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.734855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.734863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.735150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.735157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.735467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.735779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.735786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.736072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.736079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.736370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.736377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.736723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.736731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.737086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.737095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.737399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.737406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.737708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.737715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.738102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.738109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.738395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.738401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.738787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.738795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.739123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.739131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 
00:29:37.557 [2024-11-06 13:54:00.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.739470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.739825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.739833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.557 qpair failed and we were unable to recover it. 00:29:37.557 [2024-11-06 13:54:00.740151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.557 [2024-11-06 13:54:00.740158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.740367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.740374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.740679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.740686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.741002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.741009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.741217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.741224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.741428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.741436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.741741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.741751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.742126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.742133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.742430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.742437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.742840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.742855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.743186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.743193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.743372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.743380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.743693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.743700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.744128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.744136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.744307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.744314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.744602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.744609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.744939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.744947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.745312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.745319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.745507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.745514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.745815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.746193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.746201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.746516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.746524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.746751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.746759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.747082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.747089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.747291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.747297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.747529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.747536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.747825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.747832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.748134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.748141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.748354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.748361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.748560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.748567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.748807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.748814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.749079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.749086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.749409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.749416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.749597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.749604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.749913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.749920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.750111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.750118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.750347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.750354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.750667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.750674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 
00:29:37.558 [2024-11-06 13:54:00.750902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.558 [2024-11-06 13:54:00.750909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.558 qpair failed and we were unable to recover it. 00:29:37.558 [2024-11-06 13:54:00.751219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.751226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.751406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.751413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.751780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.751788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.752126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.752133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.752353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.752360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.752686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.752692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.752909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.752916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.753288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.753295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.753611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.753618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.753960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.753967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.754297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.754304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.754411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.754417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.754739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.754750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.755049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.755057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.755277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.755285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.755594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.755602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.755890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.755898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.756158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.756166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.756441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.756448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.756772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.756779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.757072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.757081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.757382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.757390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.757660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.757668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.757835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.757842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.758022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.758029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.758324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.758331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.758651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.758658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.758967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.758975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.759284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.759291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.559 [2024-11-06 13:54:00.759578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.759585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.759899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.759906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.760126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.760133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.760440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.760447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 00:29:37.559 [2024-11-06 13:54:00.760615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.559 [2024-11-06 13:54:00.760622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.559 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.760944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.760952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.761283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.761290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.761599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.761606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.761889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.761896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.762204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.762211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.762517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.762524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.762799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.762806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.763163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.763171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.763484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.763491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.763787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.763794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.763994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.764002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.764331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.764338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.764541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.764548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.764875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.764882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.765070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.765078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.765457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.765464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.765767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.765774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.766069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.766076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.766287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.766294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.766607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.766614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.766859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.766866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.767193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.767200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.767513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.767520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.767692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.767701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.767863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.767871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.768153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.768160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.768437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.768446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.768758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.768765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.769146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.769153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.769335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.769342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.769543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.769550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.769829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.769836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.770144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.770151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.770464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.770472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.770800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.770807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 
00:29:37.560 [2024-11-06 13:54:00.771031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.771038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.771305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.771312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.771654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.560 [2024-11-06 13:54:00.771661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.560 qpair failed and we were unable to recover it. 00:29:37.560 [2024-11-06 13:54:00.771958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.771965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.772260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.772267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 
00:29:37.561 [2024-11-06 13:54:00.772583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.772591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.772771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.772779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.773075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.773082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.773486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.773494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.773804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.773811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 
00:29:37.561 [2024-11-06 13:54:00.774152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.774160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.774479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.774485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.774678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.774685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.774870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.774878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.775122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.775129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 
00:29:37.561 [2024-11-06 13:54:00.775319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.775326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.775616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.775935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.775942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.776153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.776160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.776475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.776483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 
00:29:37.561 [2024-11-06 13:54:00.776814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.776821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.777172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.777179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.777507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.777514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.777704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.777711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 00:29:37.561 [2024-11-06 13:54:00.777990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.561 [2024-11-06 13:54:00.778005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.561 qpair failed and we were unable to recover it. 
00:29:37.564 [2024-11-06 13:54:00.809051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.809058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.809372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.809380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.809672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.809679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.809964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.809972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.810152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.810160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 
00:29:37.564 [2024-11-06 13:54:00.810429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.810438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.810755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.810764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.811064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.811072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.811405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.811414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.811732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.811741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 
00:29:37.564 [2024-11-06 13:54:00.811956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.811964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.812304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.812313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.812628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.812636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.812919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.812927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.813253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.813260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 
00:29:37.564 [2024-11-06 13:54:00.813568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.813576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.813917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.813925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.814211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.814219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.814524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.814532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 00:29:37.564 [2024-11-06 13:54:00.814752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.814760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.564 qpair failed and we were unable to recover it. 
00:29:37.564 [2024-11-06 13:54:00.814930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.564 [2024-11-06 13:54:00.814937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.815259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.815266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.815562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.815571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.815882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.815890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.816225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.816234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.816541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.816549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.816865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.816873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.817181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.817189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.817484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.817880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.817888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.818219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.818228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.818506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.818513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.818822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.818829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.819186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.819493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.819502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.819813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.819821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.820119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.820128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.820430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.820437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.820759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.820767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.821044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.821052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.821346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.821353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.821671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.821679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.821980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.821989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.822312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.822321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.822723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.822732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.823023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.823031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.823354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.823361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.823592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.823599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.823933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.823942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.824146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.824154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.824474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.824482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.824799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.824807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.825129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.825137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.825375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.825383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.825707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.825715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 
00:29:37.565 [2024-11-06 13:54:00.826037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.826045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.826335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.826343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.826649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.826657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.565 [2024-11-06 13:54:00.826850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.565 [2024-11-06 13:54:00.826859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.565 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.827169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.827177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.566 [2024-11-06 13:54:00.827490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.827497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.827798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.827807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.828134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.828142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.828490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.828498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.828835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.828845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.566 [2024-11-06 13:54:00.829169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.829177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.829537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.829848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.829856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.830141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.830149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.830456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.830466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.566 [2024-11-06 13:54:00.830757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.830766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.831047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.831056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.831349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.831357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.831545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.831552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.831870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.831878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.566 [2024-11-06 13:54:00.832176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.832184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.832493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.832501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.832813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.832822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.832994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.833002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.833307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.833315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.566 [2024-11-06 13:54:00.833648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.833658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.834010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.834018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.834210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.834218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.834516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.834525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 00:29:37.566 [2024-11-06 13:54:00.834760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.566 [2024-11-06 13:54:00.834769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.566 qpair failed and we were unable to recover it. 
00:29:37.569 [2024-11-06 13:54:00.868423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.868431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.868731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.868738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.869061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.869070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.869375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.869384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.869731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.869740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 
00:29:37.569 [2024-11-06 13:54:00.870068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.870077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.870385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.870393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.870665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.870673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.870984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.870992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.569 [2024-11-06 13:54:00.871328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.871336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 
00:29:37.569 [2024-11-06 13:54:00.871673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.569 [2024-11-06 13:54:00.871682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.569 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.871976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.871984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.872319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.872327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.872632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.872642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.872989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.872997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.873326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.873335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.873537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.873545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.873840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.873849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.874162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.874170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.874472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.874481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.874770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.874778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.875101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.875110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.875423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.875432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.875764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.875773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.876118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.876126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.876331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.876338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.876642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.876650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.876985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.876993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.877284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.877292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.877607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.877616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.877921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.877930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.878241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.878251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.878565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.878573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.878799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.878807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.879106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.879114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.879415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.879423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.879762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.879771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.879962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.879971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.880271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.880280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.880463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.880472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.880805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.880813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.881048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.881056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.881231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.881239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.881489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.881497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.881808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.881817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-11-06 13:54:00.882039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.882047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.882318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.882326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.882644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.882652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-11-06 13:54:00.882972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-11-06 13:54:00.882981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.883313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.883321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.883617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.883625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.883931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.883939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.884228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.884236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.884502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.884512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.884808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.884816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.885138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.885147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.885433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.885441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.885759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.885768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.886045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.886054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.886381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.886390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.886684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.886692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.886970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.886979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.887169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.887178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.887466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.887475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.887744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.887756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.888074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.888082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.888401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.888410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.888681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.888689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.888908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.888916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.889216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.889224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.889419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.889427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.889760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.889768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.890107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.890116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.890420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.890428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.890750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.890759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-11-06 13:54:00.891095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.891103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.891442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.891450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.891763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.891771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.892187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.892195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-11-06 13:54:00.892475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-11-06 13:54:00.892483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
[The same three-record failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with successive timestamps from 13:54:00.892695 through 13:54:00.924793 — roughly 110 further occurrences, all against 10.0.0.2:4420 with errno = 111.]
00:29:37.852 [2024-11-06 13:54:00.925075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.925083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.925362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.925369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.925551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.925558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.925818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.925827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.926026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.926034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 
00:29:37.852 [2024-11-06 13:54:00.926343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.926352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.926675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.926683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.927081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.927089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.927400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.927408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.927739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.927751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 
00:29:37.852 [2024-11-06 13:54:00.927962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.927970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.928303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.928312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.928611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.928620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.928928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.928936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.929168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.929176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 
00:29:37.852 [2024-11-06 13:54:00.929481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.929488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.929779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.929787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.930064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.930073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.930369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.930378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 00:29:37.852 [2024-11-06 13:54:00.930703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.930712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.852 qpair failed and we were unable to recover it. 
00:29:37.852 [2024-11-06 13:54:00.930954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.852 [2024-11-06 13:54:00.930962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.931226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.931235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.931547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.931555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.931773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.931783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.932081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.932089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.932367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.932375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.932698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.932705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.932989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.932998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.933274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.933282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.933596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.933605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.933914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.933923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.934242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.934251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.934619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.934627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.934840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.934848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.935117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.935125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.935334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.935342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.935556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.935565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.935885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.935893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.936106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.936114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.936422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.936431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.936795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.936803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.937088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.937096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.937378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.937386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.937689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.937698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.938030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.938038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.938349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.938358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.938655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.938662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.938921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.938929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.939233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.939242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.939571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.939579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.939783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.939793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.940002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.940010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.940267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.940276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.940576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.940584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.940858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.940866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 
00:29:37.853 [2024-11-06 13:54:00.941251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.941259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.941504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.941512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.941763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.853 [2024-11-06 13:54:00.941771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.853 qpair failed and we were unable to recover it. 00:29:37.853 [2024-11-06 13:54:00.942105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.942113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.942422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.942430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.942760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.942770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.943094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.943102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.943371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.943379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.943712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.943720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.943918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.943926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.944244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.944253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.944583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.944592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.944891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.944900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.945213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.945222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.945524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.945532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.945840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.945849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.946153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.946161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.946374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.946382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.946692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.946700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.946887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.946894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.947212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.947219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.947491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.947499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.947819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.947827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.948009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.948016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.948315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.948323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.948661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.948670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.948965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.948973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.949322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.949330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.949659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.949667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.949916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.950220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.950228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.950542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.950551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.950864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.950872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.951192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.951200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.951512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.951520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.951816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.951826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.952164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.952171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.952478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.952487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.952784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.952793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.854 [2024-11-06 13:54:00.953107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.953115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 
00:29:37.854 [2024-11-06 13:54:00.953446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.854 [2024-11-06 13:54:00.953454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.854 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.953755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.953763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.954051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.954058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.954356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.954365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.954644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.954652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.954969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.954977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.955281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.955289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.955533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.955540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.955729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.955737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.955977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.955986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.956220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.956227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.956561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.956569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.956789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.956798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.957091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.957099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.957379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.957387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.957709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.957717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.958074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.958082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.958381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.958390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.958659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.958667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.958953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.958961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.959254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.959262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.959569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.959578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.959884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.959893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.960194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.960203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.960536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.960545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.960865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.960873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.961247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.961255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.961555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.961564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.961752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.961761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.962050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.962058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 
00:29:37.855 [2024-11-06 13:54:00.962383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.962392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.962693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.855 [2024-11-06 13:54:00.962701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.855 qpair failed and we were unable to recover it. 00:29:37.855 [2024-11-06 13:54:00.963032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.963042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.963302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.963310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.963625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.963634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.963940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.963949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.964152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.964160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.964437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.964445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.964655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.964662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.964990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.964999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.965299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.965307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.965615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.965622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.965923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.965931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.966211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.966219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.966508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.966516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.966812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.966820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.967069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.967076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.967376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.967384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.967575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.967584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.967902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.967911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.968237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.968246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.968567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.968576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.968874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.968882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.969194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.969202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.969499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.969507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.969814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.969822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.970153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.970161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.970460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.970468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.970778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.970786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.971186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.971193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.971524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.971534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.971795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.971803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.972103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.972111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.972419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.972427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.972754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.972763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 
00:29:37.856 [2024-11-06 13:54:00.973091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.973098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.973394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.973403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.973583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.973590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.973884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.856 [2024-11-06 13:54:00.973892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.856 qpair failed and we were unable to recover it. 00:29:37.856 [2024-11-06 13:54:00.974217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.974225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 
00:29:37.857 [2024-11-06 13:54:00.974414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.974421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.974714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.974722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.975030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.975038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.975322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.975330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.975633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.975641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 
00:29:37.857 [2024-11-06 13:54:00.975877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.975887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.976173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.976181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.976481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.976489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.976754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.976762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.976959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.976967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 
00:29:37.857 [2024-11-06 13:54:00.977261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.977269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.977602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.977611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.977905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.977913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.978214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.978222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 00:29:37.857 [2024-11-06 13:54:00.978553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.857 [2024-11-06 13:54:00.978560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.857 qpair failed and we were unable to recover it. 
00:29:37.857 [2024-11-06 13:54:00.978925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.978934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.979286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.979293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.979593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.979601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.979786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.979794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.979932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.979940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.980251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.980260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.980523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.980531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.980864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.980872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.981171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.981179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.981443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.981451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.981759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.981767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.982083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.982091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.982401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.982408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.982724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.982733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.982940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.982949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.983248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.983258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.983608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.983617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.983916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.983925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.984251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.984260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.984465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.984474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.984783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.984791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.985087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.857 [2024-11-06 13:54:00.985095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.857 qpair failed and we were unable to recover it.
00:29:37.857 [2024-11-06 13:54:00.985415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.985424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.985595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.985605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.985901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.985910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.986220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.986229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.986540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.986547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.986779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.986787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.987091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.987098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.987414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.987424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.987770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.987780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.987995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.988002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.988369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.988377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.988681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.988689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.988899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.988908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.989250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.989266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.989587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.989595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.989781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.989789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.990130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.990138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.990425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.990433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.990729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.990737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.991053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.991061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.991247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.991255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.991562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.991570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.991774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.991784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.992127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.992135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.992320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.992327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.992599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.992607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.992789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.992798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.993081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.993089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.993396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.993404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.993692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.993700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.994029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.994037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.994237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.994245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.994559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.994568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.994877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.994886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.995191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.995207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.995504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.995512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.995817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.995825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.996087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.858 [2024-11-06 13:54:00.996095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.858 qpair failed and we were unable to recover it.
00:29:37.858 [2024-11-06 13:54:00.996273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.996281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.996600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.996607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.996820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.996828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.997122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.997130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.997341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.997349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.997657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.997665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.997975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.997984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.998347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.998355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.998595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.998603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.998904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.998912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.999099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.999109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.999423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.999432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.999634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.999643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:00.999971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:00.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.000279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.000289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.000577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.000586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.000990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.000998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.001296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.001306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.001619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.001627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.001985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.001994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.002298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.002307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.002512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.002519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.002776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.002784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.003153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.003161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.003497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.003506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.003818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.003826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.004114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.004122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.004462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.004470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.004733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.004741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.004987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.004995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.005329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.005338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.005639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.005647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.859 qpair failed and we were unable to recover it.
00:29:37.859 [2024-11-06 13:54:01.005952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.859 [2024-11-06 13:54:01.005961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.860 qpair failed and we were unable to recover it.
00:29:37.860 [2024-11-06 13:54:01.006284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.860 [2024-11-06 13:54:01.006293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.860 qpair failed and we were unable to recover it.
00:29:37.860 [2024-11-06 13:54:01.006598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.860 [2024-11-06 13:54:01.006607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.860 qpair failed and we were unable to recover it.
00:29:37.860 [2024-11-06 13:54:01.006957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.006965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.007269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.007277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.007579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.007587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.007854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.007863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.008171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.008178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.008477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.008487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.008711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.008720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.008991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.009000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.009335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.009344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.009622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.009631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.009966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.009974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.010249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.010257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.010542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.010550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.010829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.010838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.011154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.011163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.011461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.011471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.011759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.011767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.012140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.012148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.012445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.012453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.012809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.012817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.013022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.013030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.013337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.013345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.013637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.013645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.014035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.014043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.014375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.014384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.014647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.014655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.014971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.014979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.015281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.015289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.015573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.015581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.015817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.015825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 
00:29:37.860 [2024-11-06 13:54:01.016247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.016255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.016562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.016570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.016857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.016865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.017179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.860 [2024-11-06 13:54:01.017186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.860 qpair failed and we were unable to recover it. 00:29:37.860 [2024-11-06 13:54:01.017453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.017460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.017635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.017644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.017963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.017971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.018294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.018303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.018578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.018586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.018931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.018939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.019307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.019316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.019621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.019628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.019810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.019818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.020163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.020172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.020477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.020485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.020681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.020689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.021058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.021066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.021366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.021375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.021685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.021693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.022034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.022042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.022358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.022366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.022688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.022696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.023039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.023048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.023348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.023356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.023688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.023696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.024063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.024073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.024376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.024385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.024706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.024715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.024952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.024960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.025261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.025269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.025501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.025510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.025793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.025801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.026123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.026131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.026496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.026505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.026857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.026867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.027121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.027130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.027348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.027357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.027684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.027693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.028034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.028043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.028229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.028238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 
00:29:37.861 [2024-11-06 13:54:01.028552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.028561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.028926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.028935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.861 [2024-11-06 13:54:01.029238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.861 [2024-11-06 13:54:01.029247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.861 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.029551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.029560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.029776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.029785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-11-06 13:54:01.030105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.030113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.030446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.030455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.030517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.030525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.030755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.030764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.030989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.030998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-11-06 13:54:01.031317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.031326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.031644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.031653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.031982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.031991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.032307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.032315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-11-06 13:54:01.032625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-11-06 13:54:01.032633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-11-06 13:54:01.032996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.862 [2024-11-06 13:54:01.033006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.862 qpair failed and we were unable to recover it.
[... the same triplet -- connect() failed (errno = 111), sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats verbatim for every retry from 13:54:01.033254 through 13:54:01.066347; repeats condensed ...]
00:29:37.865 [2024-11-06 13:54:01.066695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.066702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.067009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.067018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.067309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.067317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.067637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.067646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.068005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.068014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 [2024-11-06 13:54:01.068313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.068321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.068658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.068666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.068879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.068887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.069268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.069277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.069578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.069587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 [2024-11-06 13:54:01.069761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.069770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.069973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.069981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.070305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.070314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.070617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.070626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.070953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.070963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 [2024-11-06 13:54:01.071326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.071335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.071529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.071537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.071916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.071924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.072190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.072198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-11-06 13:54:01.072542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-11-06 13:54:01.072551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.072849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.072857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.073176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.073184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.073247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.073254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.073552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.073560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.073885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.073895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.074128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.074137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.074449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.074457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.074652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.074661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.074916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.074925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.075260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.075269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.075605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.075613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.075833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.075841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.076147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.076154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.076454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.076462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.076546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.076554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.076791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.076799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.077029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.077037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.077360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.077369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.077543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.077552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.077780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.077788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.078140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.078149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.078434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.078773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.078781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.079149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.079157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.079349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.079356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.079641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.079657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.079853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.079862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.080151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.080159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.080453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.080460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.080634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.080643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.080837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.080845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.081144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.081152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.081367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.081376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.081696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.081704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.081869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.081877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-11-06 13:54:01.082203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.082213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.082523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.082531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.082715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-11-06 13:54:01.082724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-11-06 13:54:01.082927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.082937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.083281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.083290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-11-06 13:54:01.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.083602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.083809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.083817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.084195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.084203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.084540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.084548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.084794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.084802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-11-06 13:54:01.085131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.085139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.085423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.085431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.085697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.085705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.085849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.085858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.086156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.086173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-11-06 13:54:01.086349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.086357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.086680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.086689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.087044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.087052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.087365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.087372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-11-06 13:54:01.087580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-11-06 13:54:01.087588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-11-06 13:54:01.087875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.867 [2024-11-06 13:54:01.087883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:37.867 qpair failed and we were unable to recover it.
00:29:37.870 [2024-11-06 13:54:01.122050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.122058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.122327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.122335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.122520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.122529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.122850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.122858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.123198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.123207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
00:29:37.870 [2024-11-06 13:54:01.123501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.123509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.123808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.123816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.124118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.124126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.124441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.124450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.124757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.124766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
00:29:37.870 [2024-11-06 13:54:01.125048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.125056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.125343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.125352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.125678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.125689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.126020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-11-06 13:54:01.126028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-11-06 13:54:01.126327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.126335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.126675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.126684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.126953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.126961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.127237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.127245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.127557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.127566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.127900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.127909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.128193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.128201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.128406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.128414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.128675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.128683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.128976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.128985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.129252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.129261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.129565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.129575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.129930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.129939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.130120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.130127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.130339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.130347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.130622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.130630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.130929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.130937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.131229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.131237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.131552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.131560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.131897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.131905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.132233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.132240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.132573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.132581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.132900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.132908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.133102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.133111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.133461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.133470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.133790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.133800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.134104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.134112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.134483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.134492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.134811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.134820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.135156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.135164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.135447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.135455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.135764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.135772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.136096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.136105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.136429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.136438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.136743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.136755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.137086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.137093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-11-06 13:54:01.137277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.137284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-11-06 13:54:01.137603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-11-06 13:54:01.137613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.137914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.137925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.138246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.138254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.138559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.138568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-11-06 13:54:01.138898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.138906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.139099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.139107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.139426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.139433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.139602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.139609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.139827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.139835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-11-06 13:54:01.140158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.140166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.140480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.140488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.140770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.140779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.141110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.141118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.141432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.141441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-11-06 13:54:01.141753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.141762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.142079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.142088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.142385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.142393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.142591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.142599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.142801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.142810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-11-06 13:54:01.143122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.143130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.143460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.143469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.143791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.143799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.144125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.144133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.144452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.144461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-11-06 13:54:01.144660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.144668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.144874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.144881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.145183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.145191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.145472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.145480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-11-06 13:54:01.145781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-11-06 13:54:01.145790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.875 [2024-11-06 13:54:01.181489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.181498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.181795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.181804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.182151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.182160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.182470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.182478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.182782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.182791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 
00:29:37.875 [2024-11-06 13:54:01.183002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.183010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.183333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.875 [2024-11-06 13:54:01.183342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-11-06 13:54:01.183657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.183665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.183967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.183975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.184282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.184289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.184621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.184629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.184963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.184971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.185309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.185318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.185492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.185501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.185810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.185819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.186123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.186132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.186439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.186447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.186733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.186741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.187120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.187129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.187436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.187445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.187754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.187763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.188044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.188052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.188414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.188425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.188804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.188812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.189116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.189124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.189449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.189456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.189763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.189773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.190081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.190089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.190393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.190402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.190601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.190608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.190933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.190941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.191268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.191276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.191628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.191636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.191923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.191931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.192138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.192145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.192442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.192450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.192742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.192755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.193073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.193269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.193277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.193549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.193557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.876 [2024-11-06 13:54:01.193880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.193888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.194234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.194243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.194421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.194430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.194763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.194771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 00:29:37.876 [2024-11-06 13:54:01.195087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.876 [2024-11-06 13:54:01.195096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.876 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.195428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.195436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.195726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.195735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.196022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.196225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.196232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.196550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.196559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.196749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.196759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.197028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.197036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.197340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.197348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.197691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.197699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.198032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.198040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.198248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.198256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.198574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.198583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.198908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.198917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.199236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.199245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.199554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.199563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.199882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.199892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.200181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.200189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.200496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.200803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.200811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.201175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.201183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.201576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.201584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.201898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.201906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.202211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.202220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.202530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.202539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.202865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.202874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.203146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.203154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.203464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.203472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.203653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.203661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.203962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.203971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.204283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.204292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.204600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.204608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.204913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.204920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.205235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.205244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.205547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.205554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.205855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.205863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 
00:29:37.877 [2024-11-06 13:54:01.206173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.206181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.206470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.206478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.877 [2024-11-06 13:54:01.206798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.877 [2024-11-06 13:54:01.206806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.877 qpair failed and we were unable to recover it. 00:29:37.878 [2024-11-06 13:54:01.207117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.878 [2024-11-06 13:54:01.207124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.878 qpair failed and we were unable to recover it. 00:29:37.878 [2024-11-06 13:54:01.207435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.878 [2024-11-06 13:54:01.207443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.878 qpair failed and we were unable to recover it. 
00:29:37.878 [2024-11-06 13:54:01.207730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.878 [2024-11-06 13:54:01.207738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.878 qpair failed and we were unable to recover it. 00:29:37.878 [2024-11-06 13:54:01.207969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.878 [2024-11-06 13:54:01.207978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:37.878 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.208281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.208290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.208596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.208605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.209520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.209539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 
00:29:38.155 [2024-11-06 13:54:01.209849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.209859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.210164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.210172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.210478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.210486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.210790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.210798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.211114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.211123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 
00:29:38.155 [2024-11-06 13:54:01.211455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.211463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.211800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.155 [2024-11-06 13:54:01.211810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.155 qpair failed and we were unable to recover it. 00:29:38.155 [2024-11-06 13:54:01.212066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.212074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.212409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.212417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.212728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.212736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.213032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.213040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.213372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.213380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.213681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.213692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.214000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.214008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.214317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.214326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.214663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.214671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.214893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.214900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.215177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.215185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.215492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.215500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.215680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.215688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.215986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.215995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.216302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.216310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.216589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.216597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.216888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.216896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.217230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.217238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.217538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.217547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.217847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.217856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.218200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.218209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.218510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.218519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.218804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.218812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.219122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.219130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.219317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.219325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.219635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.219643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.219969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.219978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.220250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.220259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.220545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.220553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.220761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.220768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.221049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.221057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.221395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.221404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.221703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.221712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 
00:29:38.156 [2024-11-06 13:54:01.222051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.222060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.222374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.222382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.222743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.222754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.223057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.156 [2024-11-06 13:54:01.223065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.156 qpair failed and we were unable to recover it. 00:29:38.156 [2024-11-06 13:54:01.223369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.223377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.223710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.223719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.224072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.224080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.224406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.224415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.224743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.224756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.225051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.225060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.225376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.225384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.225718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.225727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.226042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.226053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.226315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.226324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.226622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.226631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.226925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.226933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.227296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.227304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.227602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.227610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.227810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.227819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.228170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.228178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.228503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.228511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.228766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.228774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.228966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.228974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.229261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.229268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.229534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.229542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.229845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.229853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.230197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.230207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.230490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.230498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.230823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.230833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.231140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.231148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.231463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.231472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.231824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.231833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.232143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.232151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.232462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.232469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.232811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.232820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.233154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.233162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.233465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.233475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.233802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.233811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.234157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.234166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 00:29:38.157 [2024-11-06 13:54:01.234496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.157 [2024-11-06 13:54:01.234505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.157 qpair failed and we were unable to recover it. 
00:29:38.157 [2024-11-06 13:54:01.234817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.157 [2024-11-06 13:54:01.234832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.157 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 13:54:01.234817 through 13:54:01.268563; repeats elided for readability ...]
00:29:38.161 [2024-11-06 13:54:01.268554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.161 [2024-11-06 13:54:01.268563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.161 qpair failed and we were unable to recover it.
00:29:38.161 [2024-11-06 13:54:01.268866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.268874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.269053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.269061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.269391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.269398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.269718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.269726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.270026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.270036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.270238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.270246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.270534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.270542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.270856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.270864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.271181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.271189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.271494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.271501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.271806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.271814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.272084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.272092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.272376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.272383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.272752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.272760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.273050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.273057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.273403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.273410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.273618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.273626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.273885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.273895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.274105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.274113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.274412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.274419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.274487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.274494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.274772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.274780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.275090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.275098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.275410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.275417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.275612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.275620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.275724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.275732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.276030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.276039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.276173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.276181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.276388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.276396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 00:29:38.161 [2024-11-06 13:54:01.276697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.161 [2024-11-06 13:54:01.276706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.161 qpair failed and we were unable to recover it. 
00:29:38.161 [2024-11-06 13:54:01.276808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.276816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.277135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.277143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.277460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.277469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.277667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.277676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.277965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.277973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.278177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.278185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.278525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.278533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.278869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.278878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.279180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.279473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.279480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.279781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.279789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.279988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.279996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.280175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.280184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.280389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.280398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.280688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.280698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.280995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.281003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.281213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.281221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.281497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.281505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.281838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.281847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.282044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.282051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.282319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.282327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.282484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.282493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.282775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.282782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.283042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.283049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.283364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.283372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.283693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.283702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.283990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.283999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.284301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.284310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.284644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.284652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.284981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.284989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.285274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.285282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.285580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.285589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.285905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.285913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.286224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.286232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.286522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.286530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 
00:29:38.162 [2024-11-06 13:54:01.286864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.286873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.287170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.287179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.162 [2024-11-06 13:54:01.287489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.162 [2024-11-06 13:54:01.287497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.162 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.287792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.287800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.288128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.288137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 
00:29:38.163 [2024-11-06 13:54:01.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.288322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.288655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.288663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.288952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.288960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.289273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.289281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.289585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.289594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 
00:29:38.163 [2024-11-06 13:54:01.289918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.289926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.290111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.290120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.290429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.290437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.290738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.290757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 00:29:38.163 [2024-11-06 13:54:01.291083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.163 [2024-11-06 13:54:01.291091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.163 qpair failed and we were unable to recover it. 
00:29:38.166 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 / qpair failed message group repeats through 2024-11-06 13:54:01.324355 ...]
00:29:38.166 [2024-11-06 13:54:01.324653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.324661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.324996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.325004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.325284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.325292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.325600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.325609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.325911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.325920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-11-06 13:54:01.326124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.326133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.326438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.326446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.326754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.326762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.327047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.327055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.327363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.327371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-11-06 13:54:01.327697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.327705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.327991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.327999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.328347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.328356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.328657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.328665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.328956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.328967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-11-06 13:54:01.329269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.329278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.329582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.329590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.329896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.329904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.330203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.330211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.330520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.330529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 
00:29:38.166 [2024-11-06 13:54:01.330810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.330818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.331026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.331035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.166 [2024-11-06 13:54:01.331225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.166 [2024-11-06 13:54:01.331232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.166 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.331497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.331505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.331809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.331817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.332124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.332132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.332439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.332448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.332755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.332763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.333059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.333067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.333386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.333395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.333717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.333726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.333930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.333938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.334255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.334262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.334564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.334572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.334880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.334888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.335205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.335213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.335521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.335529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.335811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.335819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.336137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.336145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.336452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.336460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.336773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.336781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.337088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.337097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.337402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.337409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.337716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.337723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.337920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.337929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.338215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.338223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.338544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.338553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.338889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.338897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.339216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.339224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.339535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.339543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.339857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.339865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.340171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.340180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.340482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.340491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.340806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.340814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.341177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.341187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.341494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.341502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.341808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.341816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.342134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.342141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.342427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.342435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 00:29:38.167 [2024-11-06 13:54:01.342748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.342757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.167 qpair failed and we were unable to recover it. 
00:29:38.167 [2024-11-06 13:54:01.343054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.167 [2024-11-06 13:54:01.343063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.343370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.343378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.343663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.343671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.343975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.343983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.344168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.344177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 
00:29:38.168 [2024-11-06 13:54:01.344480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.344488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.344802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.344810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.345118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.345126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.345429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.345437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.345734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.345742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 
00:29:38.168 [2024-11-06 13:54:01.346032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.346040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.346350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.346357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.346660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.346668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.346953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.346962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.347254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.347263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 
00:29:38.168 [2024-11-06 13:54:01.347567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.347575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.347882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.347890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.348200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.348208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.348493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.348500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 00:29:38.168 [2024-11-06 13:54:01.348802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.168 [2024-11-06 13:54:01.348810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.168 qpair failed and we were unable to recover it. 
00:29:38.171 [2024-11-06 13:54:01.383046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.383054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.383244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.383253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.383455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.383463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.383734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.383742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.384070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.384078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 
00:29:38.171 [2024-11-06 13:54:01.384384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.384392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.384701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.384709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.385008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.385016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.385385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.385393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.385690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.385697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 
00:29:38.171 [2024-11-06 13:54:01.386012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.386021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.386328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.386336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.386632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.386640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.386837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.386845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.387029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.387038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 
00:29:38.171 [2024-11-06 13:54:01.387339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.387347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.387642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.387651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.171 qpair failed and we were unable to recover it. 00:29:38.171 [2024-11-06 13:54:01.387962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.171 [2024-11-06 13:54:01.387970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.388122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.388131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.388434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.388442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.388768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.388776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.389081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.389089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.389285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.389293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.389476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.389483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.389851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.389862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.390160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.390168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.390473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.390482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.390791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.390799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.391059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.391067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.391387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.391394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.391702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.391710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.392018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.392026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.392334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.392342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.392523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.392532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.392838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.392846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.393152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.393160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.393464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.393473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.393777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.393786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.394103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.394111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.394408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.394416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.394699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.394707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.394986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.394994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.395301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.395309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.395614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.395621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.395919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.395927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.396097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.396107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.396433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.396441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.396749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.396758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.397061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.397069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.397410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.397418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.397724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.397731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.398008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.398017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.398303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.398311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.398616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.398625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 00:29:38.172 [2024-11-06 13:54:01.398913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.172 [2024-11-06 13:54:01.398922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.172 qpair failed and we were unable to recover it. 
00:29:38.172 [2024-11-06 13:54:01.399241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.399248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.399443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.399451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.399711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.399719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.400016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.400024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.400326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.400334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 
00:29:38.173 [2024-11-06 13:54:01.400619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.400627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.400909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.400918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.401223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.401232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.401535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.401543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.401726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.401735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 
00:29:38.173 [2024-11-06 13:54:01.402019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.402028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.402333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.402340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.402647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.402655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.402926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.402934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.403251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.403259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 
00:29:38.173 [2024-11-06 13:54:01.403550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.403559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.403853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.403861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.404143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.404151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.404451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.404459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 00:29:38.173 [2024-11-06 13:54:01.404758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.173 [2024-11-06 13:54:01.404767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.173 qpair failed and we were unable to recover it. 
00:29:38.173 [2024-11-06 13:54:01.405036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.173 [2024-11-06 13:54:01.405044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.173 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats roughly 110 more times with only the microsecond timestamps changing, between 13:54:01.405 and 13:54:01.439 ...]
00:29:38.176 [2024-11-06 13:54:01.439601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.439609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.439918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.439927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.440248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.440257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.440455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.440463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.440650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.440659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 
00:29:38.176 [2024-11-06 13:54:01.440861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.440869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.441144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.441152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.441459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.441466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.441775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.441783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.176 [2024-11-06 13:54:01.442119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.442127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 
00:29:38.176 [2024-11-06 13:54:01.442412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.176 [2024-11-06 13:54:01.442419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.176 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.442601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.442609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.442977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.442985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.443288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.443650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.443658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.443965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.443972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.444279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.444286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.444487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.444495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.444772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.444780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.445099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.445107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.445411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.445419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.445723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.445733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.446020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.446028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.446333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.446342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.446637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.446646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.446973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.446981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.447276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.447283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.447589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.447597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.447915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.447923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.448245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.448252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.448579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.448588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.448893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.448902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.449080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.449088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.449376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.449384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.449687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.449695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.449994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.450002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.450309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.450317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.450620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.450629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.450929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.450937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.451252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.451260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.451564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.451572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.451879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.451887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.452256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.452263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.452559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.452567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.452874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.452883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 
00:29:38.177 [2024-11-06 13:54:01.453205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.453213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.453500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.453507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.453817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.177 [2024-11-06 13:54:01.453825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.177 qpair failed and we were unable to recover it. 00:29:38.177 [2024-11-06 13:54:01.454095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.454410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.454417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.454721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.455016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.455024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.455351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.455359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.455664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.455672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.456018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.456026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.456329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.456337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.456617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.456625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.456948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.456957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.457247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.457255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.457602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.457610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.457909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.457918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.458221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.458230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.458574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.458582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.458875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.458883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.459187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.459195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.459537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.459545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.459853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.460159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.460167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.460495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.460503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.460808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.460817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.461102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.461110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.461414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.461421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.461729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.461737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.462055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.462063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 00:29:38.178 [2024-11-06 13:54:01.462382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
00:29:38.178 [2024-11-06 13:54:01.462585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.178 [2024-11-06 13:54:01.462594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.178 qpair failed and we were unable to recover it. 
[... the same three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable — repeats over 100 times between 13:54:01.462585 and 13:54:01.497995 ...]
00:29:38.181 [2024-11-06 13:54:01.497987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.181 [2024-11-06 13:54:01.497995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.181 qpair failed and we were unable to recover it. 
00:29:38.181 [2024-11-06 13:54:01.498283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.181 [2024-11-06 13:54:01.498291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.181 qpair failed and we were unable to recover it. 00:29:38.181 [2024-11-06 13:54:01.498593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.181 [2024-11-06 13:54:01.498602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.181 qpair failed and we were unable to recover it. 00:29:38.181 [2024-11-06 13:54:01.498905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.181 [2024-11-06 13:54:01.498914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.181 qpair failed and we were unable to recover it. 00:29:38.181 [2024-11-06 13:54:01.499221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.499229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.499520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.499528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.499830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.499839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.500151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.500160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.500465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.500474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.500799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.500807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.501121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.501129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.501436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.501444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.501756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.501765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.502075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.502083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.502455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.502463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.502764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.502772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.503086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.503094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.503418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.503427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.503733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.503743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.504074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.504083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.504386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.504394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.504716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.504724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.505009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.505017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.505370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.505378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.505681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.505689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.506002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.506010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.506314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.506322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.506629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.506637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.506920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.506929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.507230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.507239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.507544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.507552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.507856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.507865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.508174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.508184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.508512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.508520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.508842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.508850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.509156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.509164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.509468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.509476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.509768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.509777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.510091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.510099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.510407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.510415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.510715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.510723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 
00:29:38.182 [2024-11-06 13:54:01.511007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.182 [2024-11-06 13:54:01.511015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.182 qpair failed and we were unable to recover it. 00:29:38.182 [2024-11-06 13:54:01.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.511316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.511638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.511646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.511971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.511979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.512306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.512314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 
00:29:38.183 [2024-11-06 13:54:01.512621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.512629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.512924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.512933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.513239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.513247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.183 [2024-11-06 13:54:01.513579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.183 [2024-11-06 13:54:01.513587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.183 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.513895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.513905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 
00:29:38.458 [2024-11-06 13:54:01.514103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.514112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.514384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.514722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.514730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.515004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.515012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.515320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.515327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 
00:29:38.458 [2024-11-06 13:54:01.515633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.515641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.515860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.515870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.516146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.516154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.516450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.516458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.516756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.516764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 
00:29:38.458 [2024-11-06 13:54:01.517091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.517099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.517405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.517414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.517710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.517718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.518072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.518081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.518406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.518414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 
00:29:38.458 [2024-11-06 13:54:01.518719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-11-06 13:54:01.518728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.458 qpair failed and we were unable to recover it. 00:29:38.458 [2024-11-06 13:54:01.519024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.519033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.519337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.519345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.519633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.519642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.519854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.519862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 
00:29:38.459 [2024-11-06 13:54:01.520216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.520224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.520514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.520524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.520811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.520819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.521139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.521147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 00:29:38.459 [2024-11-06 13:54:01.521443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.521451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it. 
00:29:38.459 [2024-11-06 13:54:01.521758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.459 [2024-11-06 13:54:01.521766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.459 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420) repeats continuously from 13:54:01.521758 through 13:54:01.556463; duplicate log entries elided ...]
00:29:38.462 [2024-11-06 13:54:01.556766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.556774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.557083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.557091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.557380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.557388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.557695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.557703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.558010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.558019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-11-06 13:54:01.558321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.558329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.558652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.558660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.558966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.558974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.559276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.559284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.559589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.559596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-11-06 13:54:01.559931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.559939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.560253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.560261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.560576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.560584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.560889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.560898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.561254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.561263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-11-06 13:54:01.561569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.561577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.561881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.561889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.562238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.562246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.562571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.562579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.562886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.562893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-11-06 13:54:01.563232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.563240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.563558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.563565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.563854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.563863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.564228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.564236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.564414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.564423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-11-06 13:54:01.564749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.564757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.565030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.565038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-11-06 13:54:01.565317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.462 [2024-11-06 13:54:01.565325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.565634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.565644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.565919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.565927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.566226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.566234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.566508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.566516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.566810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.566819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.567123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.567132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.567418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.567425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.567623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.567630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.567891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.567899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.568259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.568267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.568556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.568564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.568872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.568880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.569191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.569199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.569506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.569513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.569706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.569715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.570011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.570019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.570326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.570335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.570638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.570648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.570950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.570959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.571268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.571276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.571586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.571593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.571975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.571983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.572274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.572282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.572588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.572596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.572903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.572911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.573211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.573220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.573503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.573512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.573815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.573824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.574213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.574221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.574516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.574524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.574847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.574855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.575162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.575171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-11-06 13:54:01.575476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.575485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-11-06 13:54:01.575678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.463 [2024-11-06 13:54:01.575686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.575971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.575980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.576284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.576292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.576597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.576606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 
00:29:38.464 [2024-11-06 13:54:01.576913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.576921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.577096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.577105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.577429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.577438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.577752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.578068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.578076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 
00:29:38.464 [2024-11-06 13:54:01.578237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.578246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.578439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.578447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.578781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.579079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.579087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-11-06 13:54:01.579389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.579396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it. 
00:29:38.464 [2024-11-06 13:54:01.579436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.464 [2024-11-06 13:54:01.579443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.464 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420) repeats verbatim, differing only in timestamp, through 2024-11-06 13:54:01.614334 ...]
00:29:38.467 [2024-11-06 13:54:01.614639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.614647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.614945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.614954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.615261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.615269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.615575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.615583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.615890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.615898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 
00:29:38.467 [2024-11-06 13:54:01.616183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.616191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.616499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.616507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.616815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.616823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.617129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.617136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.617498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.617506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 
00:29:38.467 [2024-11-06 13:54:01.617801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.617810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.617992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.618001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.618333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.618341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.618666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.618673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.618936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.618944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 
00:29:38.467 [2024-11-06 13:54:01.619248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.619255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.619563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.619570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.619859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.619867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.620172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.620179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.620484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.620493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 
00:29:38.467 [2024-11-06 13:54:01.620777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.467 [2024-11-06 13:54:01.620786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.467 qpair failed and we were unable to recover it. 00:29:38.467 [2024-11-06 13:54:01.620962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.620970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.621145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.621154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.621469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.621477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.621787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.621795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.622082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.622090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.622470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.622478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.622774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.622782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.623087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.623094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.623382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.623390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.623680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.623687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.623996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.624005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.624310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.624319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.624603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.624611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.624915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.624923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.625228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.625236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.625545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.625553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.625841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.625849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.626157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.626168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.626469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.626477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.626782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.626790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.627109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.627117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.627422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.627431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.627743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.627756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.628034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.628043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.628335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.628343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.628648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.628656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.628930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.628938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.629245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.629253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.629549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.629557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.629859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.629867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.630199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.630207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.630512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.630520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.630810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.630818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.631129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.631137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.468 [2024-11-06 13:54:01.631446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.631454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.631760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.631768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.632065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.632073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.632394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.632402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 00:29:38.468 [2024-11-06 13:54:01.632752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.468 [2024-11-06 13:54:01.632760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.468 qpair failed and we were unable to recover it. 
00:29:38.469 [2024-11-06 13:54:01.632948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.632958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.633271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.633279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.633582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.633590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.633898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.633906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.634212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.634220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 
00:29:38.469 [2024-11-06 13:54:01.634503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.634511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.634810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.634818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.635137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.635145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.635423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.635430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.635756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.635764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 
00:29:38.469 [2024-11-06 13:54:01.636076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.636085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.636390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.636398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.636703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.636710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.637003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.637011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 00:29:38.469 [2024-11-06 13:54:01.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.469 [2024-11-06 13:54:01.637319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.469 qpair failed and we were unable to recover it. 
00:29:38.469 [2024-11-06 13:54:01.637626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.637634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.637836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.637845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.638165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.638173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.638497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.638506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.638817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.638824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.639129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.639137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.639431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.639439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.639741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.639751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.639947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.639954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.640272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.640279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.640445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.640453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.640730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.640738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.641010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.641019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.641322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.641331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.641634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.641642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.641962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.641970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.642271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.642278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.642583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.642591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.642896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.642904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.643079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.643086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.643245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.643253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.643512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.643520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.643839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.469 [2024-11-06 13:54:01.643848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.469 qpair failed and we were unable to recover it.
00:29:38.469 [2024-11-06 13:54:01.644154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.644161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.644508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.644515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.644807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.644815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.645125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.645133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.645407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.645414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.645737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.645745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.646068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.646076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.646447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.646456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.646621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.646631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.646969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.646978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.647278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.647286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.647583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.647590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.647885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.647893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.648201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.648209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.648519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.648527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.648679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.648687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.649004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.649011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.649315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.649323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.649490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.649498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.649842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.649849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.650158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.650168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.650472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.650480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.650788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.650806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.651147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.651155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.651460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.651467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.651778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.651787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.652043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.652051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.652382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.652389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.652695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.652703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.653007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.653015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.653324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.653331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.653622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.653914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.653922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.654228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.654236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.654585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.654593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.654910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.654920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.655234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.655243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.655543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.655551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.470 qpair failed and we were unable to recover it.
00:29:38.470 [2024-11-06 13:54:01.655858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.470 [2024-11-06 13:54:01.655866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.656224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.656526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.656534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.656810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.656818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.657125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.657133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.657419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.657427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.657753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.657762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.658045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.658053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.658361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.658370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.658670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.658678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.659017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.659026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.659332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.659340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.659650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.659659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.659935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.659943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.660261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.660269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.660574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.660583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.660894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.660904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.661207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.661216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.661519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.661528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.661831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.661840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.662145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.662154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.662459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.662468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.662770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.662781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.663100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.663108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.663410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.663419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.663714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.663723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.664027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.664036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.664346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.664354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.664660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.664667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.664856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.664866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.665176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.665184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.665465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.665473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.665778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.665786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.666114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.666122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.666428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.666436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.666749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.666758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.667072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.667081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.667374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.471 [2024-11-06 13:54:01.667383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.471 qpair failed and we were unable to recover it.
00:29:38.471 [2024-11-06 13:54:01.667686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.667695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.668006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.668016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.668321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.668330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.668636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.668644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.668954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.668962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.669284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.669291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.669591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.669598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.669893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.669902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.670196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.670204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.670509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.670517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.670827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.670836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.671150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.671159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.671463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.671471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.671787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.671796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.672124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.672133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.672423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.472 [2024-11-06 13:54:01.672431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.472 qpair failed and we were unable to recover it.
00:29:38.472 [2024-11-06 13:54:01.672736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.672744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.673043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.673051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.673354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.673362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.673645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.673654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.673959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.673968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 
00:29:38.472 [2024-11-06 13:54:01.674279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.674288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.674595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.674604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.674899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.674906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.675218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.675228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.675404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.675412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 
00:29:38.472 [2024-11-06 13:54:01.675755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.675763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.676086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.676094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.676397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.676404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.676715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.676723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.676918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 
00:29:38.472 [2024-11-06 13:54:01.677231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.677239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.677552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.677560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.677868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.472 [2024-11-06 13:54:01.677876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.472 qpair failed and we were unable to recover it. 00:29:38.472 [2024-11-06 13:54:01.678222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.678231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.678518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.678527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.678703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.678711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.678984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.678993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.679177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.679187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.679516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.679524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.679834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.679843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.680200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.680208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.680512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.680520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.680803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.680811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.681128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.681136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.681430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.681437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.681742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.681753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.682036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.682045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.682350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.682359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.682663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.682672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.682983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.682992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.683283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.683291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.683595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.683603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.683913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.683921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.684237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.684245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.684530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.684538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.684819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.684827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.685147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.685155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.685457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.685466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.685757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.685766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.686070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.686078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.686393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.686401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.686596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.686604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.686833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.686842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.687151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.687161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.687323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.687332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.687613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.687621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.687790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.687800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.688077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.688084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.688392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.688399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.688714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.688722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 
00:29:38.473 [2024-11-06 13:54:01.688944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.688953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.473 qpair failed and we were unable to recover it. 00:29:38.473 [2024-11-06 13:54:01.689240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.473 [2024-11-06 13:54:01.689248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.689570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.689579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.689918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.689927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.690228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.690236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 
00:29:38.474 [2024-11-06 13:54:01.690541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.690549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.690853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.690861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.691170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.691179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.691512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.691520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.691827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 
00:29:38.474 [2024-11-06 13:54:01.692102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.692111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.692430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.692438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.692726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.692735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.692910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.692918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.693134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.693143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 
00:29:38.474 [2024-11-06 13:54:01.693352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.693360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.693649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.693657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.693974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.693982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.694259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.694267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.694576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.694584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 
00:29:38.474 [2024-11-06 13:54:01.694759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.694769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.695108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.695117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.695317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.695325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.695600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.695607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 00:29:38.474 [2024-11-06 13:54:01.695786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.474 [2024-11-06 13:54:01.695796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.474 qpair failed and we were unable to recover it. 
00:29:38.477 [2024-11-06 13:54:01.728812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.728819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.728999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.729007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.729201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.729209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.729488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.729495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.729676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.729684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 
00:29:38.477 [2024-11-06 13:54:01.729975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.729985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.730289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.730297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.730645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.730652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.730977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.730985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.731262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.731270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 
00:29:38.477 [2024-11-06 13:54:01.731430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.731440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.731811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.731819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.732115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.732123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.732419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.477 [2024-11-06 13:54:01.732427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.477 qpair failed and we were unable to recover it. 00:29:38.477 [2024-11-06 13:54:01.732751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.732760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.733059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.733068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.733382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.733389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.733723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.733731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.734023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.734031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.734302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.734310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.734614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.734622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.734906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.734914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.735220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.735228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.735535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.735543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.735846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.735855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.736148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.736157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.736463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.736472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.736782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.736790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.736976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.736984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.737263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.737271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.737577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.737584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.737938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.737946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.738290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.738298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.738583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.738591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.738902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.738909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.739247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.739254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.739558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.739566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.739865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.739874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.740175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.740183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.740488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.740496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.740795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.740803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.741129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.741137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.741422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.741430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.741742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.741753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.741939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.741948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.742244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.742253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.742558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.742566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.742871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.742879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.743184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.743192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.743481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.743489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 
00:29:38.478 [2024-11-06 13:54:01.743794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.743802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.744113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.478 [2024-11-06 13:54:01.744121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.478 qpair failed and we were unable to recover it. 00:29:38.478 [2024-11-06 13:54:01.744424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.744432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.744720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.744728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.745041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.745050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 
00:29:38.479 [2024-11-06 13:54:01.745366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.745374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.745678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.745687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.745979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.745987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.746296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.746304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.746617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.746625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 
00:29:38.479 [2024-11-06 13:54:01.746962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.746970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.747259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.747267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.747576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.747584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.747890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.747898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.748204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.748213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 
00:29:38.479 [2024-11-06 13:54:01.748493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.748501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.748811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.748820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.749138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.749145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.749448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.749456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.749819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.749826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 
00:29:38.479 [2024-11-06 13:54:01.750132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.750140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.750437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.750444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.750752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.750760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.751071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.751079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 00:29:38.479 [2024-11-06 13:54:01.751389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.479 [2024-11-06 13:54:01.751397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.479 qpair failed and we were unable to recover it. 
00:29:38.479 [2024-11-06 13:54:01.751674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.479 [2024-11-06 13:54:01.751682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.479 qpair failed and we were unable to recover it.
00:29:38.482 [... the same record triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 13:54:01.751 through 13:54:01.787 ...]
00:29:38.482 [2024-11-06 13:54:01.787867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.787875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.788184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.788193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.788497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.788505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.788851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.788859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.789164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.789171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 
00:29:38.482 [2024-11-06 13:54:01.789485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.789493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.789797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.789806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.790148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.790156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.790464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.790471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.790827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.790835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 
00:29:38.482 [2024-11-06 13:54:01.791139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.791439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.482 [2024-11-06 13:54:01.791448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.482 qpair failed and we were unable to recover it. 00:29:38.482 [2024-11-06 13:54:01.791752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.791761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.791950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.791959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.792268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.792276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.792463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.792474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.792692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.792700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.792977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.792985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.793293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.793301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.793590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.793598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.793906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.793914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.794253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.794260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.794613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.794621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.794957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.794965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.795232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.795241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.795542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.795549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.795853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.795861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.796175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.796182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.796490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.796498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.796809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.796817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.797125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.797133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.797429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.797437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.797743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.797758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.797837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.797845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.798108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.798115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.798432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.798440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.798719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.798726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.799002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.799010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.799316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.799324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.799629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.799637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.799917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.799925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.800255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.800263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.800601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.800610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.800772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.800781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.801075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.801082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.483 [2024-11-06 13:54:01.801390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.801705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.801712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.802072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.802080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.802405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.802413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 00:29:38.483 [2024-11-06 13:54:01.802724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.483 [2024-11-06 13:54:01.802732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.483 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.803093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.803101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.803396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.803404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.803582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.803589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.803787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.803795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.804073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.804081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.804399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.804408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.804744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.804755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.805057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.805064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.805371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.805379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.805689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.805697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.806063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.806071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.806261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.806269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.806571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.806579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.806889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.806898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.807186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.807194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.807466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.807474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.807799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.807806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.808164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.808172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.808462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.808470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.808779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.808787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.809112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.809120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.809442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.809449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.809738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.809749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.810023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.810031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 00:29:38.484 [2024-11-06 13:54:01.810381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.810389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.484 [2024-11-06 13:54:01.810580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.484 [2024-11-06 13:54:01.810587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.484 qpair failed and we were unable to recover it. 
00:29:38.765 [2024-11-06 13:54:01.845039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.765 [2024-11-06 13:54:01.845047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.765 qpair failed and we were unable to recover it. 00:29:38.765 [2024-11-06 13:54:01.845353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.765 [2024-11-06 13:54:01.845361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.765 qpair failed and we were unable to recover it. 00:29:38.765 [2024-11-06 13:54:01.845658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.765 [2024-11-06 13:54:01.845666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.765 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.845951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.845959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.846265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.846273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.846583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.846591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.846776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.846784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.847081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.847088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.847395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.847403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.847716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.847724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.848035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.848043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.848330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.848338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.848522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.848532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.848856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.848864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.849163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.849171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.849504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.849512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.849823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.849830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.850153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.850161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.850475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.850484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.850810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.850818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.851126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.851134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.851442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.851450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.851751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.851760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.852049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.852056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.852364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.852373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.852687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.852697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.853000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.853008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.853296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.853305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.853619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.853628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.853932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.853940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.854179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.854186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.854493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.854816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.854824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.855165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.855173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.855484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.855493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.766 [2024-11-06 13:54:01.855854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.855863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.856042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.856051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.856335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.856343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.856651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.856661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 00:29:38.766 [2024-11-06 13:54:01.856952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.766 [2024-11-06 13:54:01.856960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.766 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.857251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.857259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.857579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.857587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.857891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.857899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.858061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.858069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.858389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.858396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.858707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.858714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.859052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.859061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.859445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.859453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.859751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.859759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.860081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.860089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.860359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.860367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.860548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.860556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.860931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.860939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.861238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.861245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.861557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.861564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.861873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.861881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.862200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.862208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.862560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.862569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.862867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.862875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.863194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.863202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.863501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.863508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.863759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.863768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.864058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.864065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.864375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.864383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.864686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.864694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.865007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.865015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.865335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.865343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.865629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.865638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.865802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.865811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.865995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.866004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.866255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.866263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.866594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.866602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.866881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.866888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.867088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.867105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 00:29:38.767 [2024-11-06 13:54:01.867320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.867327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it. 
00:29:38.767 [2024-11-06 13:54:01.867490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.767 [2024-11-06 13:54:01.867498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.767 qpair failed and we were unable to recover it.
00:29:38.771 [... same error triplet repeated for subsequent reconnect attempts through 13:54:01.901887: connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:29:38.771 [2024-11-06 13:54:01.902198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.902206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.902484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.902492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.902803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.902810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.903117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.903123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.903418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.903424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 
00:29:38.771 [2024-11-06 13:54:01.903744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.903755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.904035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.904043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.904344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.904351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.904576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.904584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.904847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.904855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 
00:29:38.771 [2024-11-06 13:54:01.905168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.905175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.905457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.905464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.905744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.905755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.906049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.906057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.906379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.906386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 
00:29:38.771 [2024-11-06 13:54:01.906668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.906675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.907049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.907056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.907354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.907361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.907696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.907704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.907983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.907991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 
00:29:38.771 [2024-11-06 13:54:01.908319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.771 [2024-11-06 13:54:01.908326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.771 qpair failed and we were unable to recover it. 00:29:38.771 [2024-11-06 13:54:01.908609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.908616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.908912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.908927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.909246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.909254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.909535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.909543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.909855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.909862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.910185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.910193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.910474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.910482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.910790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.910800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.911084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.911092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.911377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.911384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.911693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.911700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.911911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.911925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.912259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.912265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.912551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.912559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.912887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.912894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.913214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.913221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.913517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.913524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.913810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.913817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.914105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.914111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.914417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.914424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.914724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.914731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.915056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.915064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.915377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.915384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.915706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.915712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.916020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.916027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.916318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.916324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.916611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.916619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.916911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.916919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.917217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.917225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.917531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.917538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.917810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.917817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.918139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.918146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.918532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.918539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.918846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.918853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 
00:29:38.772 [2024-11-06 13:54:01.919094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.919101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.772 qpair failed and we were unable to recover it. 00:29:38.772 [2024-11-06 13:54:01.919421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.772 [2024-11-06 13:54:01.919431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.919753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.919761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.920072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.920079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.920242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.920252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 
00:29:38.773 [2024-11-06 13:54:01.920568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.920575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.920861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.920868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.921181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.921190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.921513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.921521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.921811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.921818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 
00:29:38.773 [2024-11-06 13:54:01.922144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.922150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.922482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.922489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.922799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.922806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.923009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.923016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.923365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.923372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 
00:29:38.773 [2024-11-06 13:54:01.923550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.923557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.923883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.923890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.924195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.924203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.924397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.924404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 00:29:38.773 [2024-11-06 13:54:01.924722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.773 [2024-11-06 13:54:01.924729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.773 qpair failed and we were unable to recover it. 
00:29:38.773 [2024-11-06 13:54:01.925118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.773 [2024-11-06 13:54:01.925125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.773 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 13:54:01.925408 through 13:54:01.959156, log timestamps 00:29:38.773-00:29:38.776 ...]
00:29:38.776 [2024-11-06 13:54:01.959461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-06 13:54:01.959469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-06 13:54:01.959774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-06 13:54:01.959781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-06 13:54:01.960055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-06 13:54:01.960062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-06 13:54:01.960349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-06 13:54:01.960355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-06 13:54:01.960662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-06 13:54:01.960669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.960974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.960981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.961292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.961298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.961596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.961602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.961934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.961941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.962232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.962238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.962548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.962556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.962875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.962882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.963190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.963197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.963506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.963513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.963820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.963827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.964117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.964123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.964370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.964376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.964701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.964707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.965031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.965038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.965357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.965364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.965680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.965688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.965985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.965992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.966298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.966304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.966516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.966523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.966689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.966696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.967016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.967023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.967323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.967330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.967518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.967532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.967924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.967931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.968216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.968223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.968528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.968536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.968856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.968864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.969132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.969141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.969444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.969451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.969741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.969751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-06 13:54:01.970122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.970128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.970438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.970444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.970763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.970770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.971084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-06 13:54:01.971091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-06 13:54:01.971294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.971308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.971639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.971647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.971954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.971961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.972288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.972296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.972586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.972594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.972893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.972900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.973207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.973214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.973529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.973537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.973831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.973838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.974152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.974158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.974473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.974480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.974787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.974794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.975099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.975105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.975417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.975424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.975732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.975739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.976024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.976032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.976333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.976340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.976648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.976655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.976964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.976971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.977279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.977286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.977847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.977865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.978189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.978197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.978504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.978511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.978699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.978706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.978999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.979006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.979375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.979381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.979666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.979673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.979996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.980003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.980208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.980214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.980547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.980554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.980843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.980850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 
00:29:38.778 [2024-11-06 13:54:01.981158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.981164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.981356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.778 [2024-11-06 13:54:01.981363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.778 qpair failed and we were unable to recover it. 00:29:38.778 [2024-11-06 13:54:01.981729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.779 [2024-11-06 13:54:01.981739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.779 qpair failed and we were unable to recover it. 00:29:38.779 [2024-11-06 13:54:01.982044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.779 [2024-11-06 13:54:01.982052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.779 qpair failed and we were unable to recover it. 00:29:38.779 [2024-11-06 13:54:01.982365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.779 [2024-11-06 13:54:01.982373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.779 qpair failed and we were unable to recover it. 
00:29:38.779 [2024-11-06 13:54:01.982674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.779 [2024-11-06 13:54:01.982681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.779 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420) repeats ~110 more times, timestamps 13:54:01.982991 through 13:54:02.016155 ...]
00:29:38.782 [2024-11-06 13:54:02.016465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.016472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.016768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.016776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.017113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.017119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 831151 Killed "${NVMF_APP[@]}" "$@"
00:29:38.782 [2024-11-06 13:54:02.017389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.017396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.017715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.017722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.017927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.017935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:38.782 [2024-11-06 13:54:02.018313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.018321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.018500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.018508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:38.782 [2024-11-06 13:54:02.018743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.018758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:38.782 [2024-11-06 13:54:02.019050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.019057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:38.782 [2024-11-06 13:54:02.019376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:38.782 [2024-11-06 13:54:02.019383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.019703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.019710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.020009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.782 [2024-11-06 13:54:02.020016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.782 qpair failed and we were unable to recover it.
00:29:38.782 [2024-11-06 13:54:02.020337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.020344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.020687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.020694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.021003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.021010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.021331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.021339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.021660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.021667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 
00:29:38.782 [2024-11-06 13:54:02.021964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.021974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.022276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.022282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.782 qpair failed and we were unable to recover it. 00:29:38.782 [2024-11-06 13:54:02.022588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.782 [2024-11-06 13:54:02.022595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.022886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.022893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.023207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-06 13:54:02.023499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.023506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.023830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.023837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.024027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.024035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.024343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.024352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.024663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.024672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-06 13:54:02.024885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.024893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.025214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.025223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.025560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.025568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.025880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.025888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.026215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.026223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-06 13:54:02.026536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.026545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 [2024-11-06 13:54:02.026880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.026889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=832270
00:29:38.783 [2024-11-06 13:54:02.027223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.027232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 832270
00:29:38.783 [2024-11-06 13:54:02.027541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.027549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 832270 ']'
00:29:38.783 [2024-11-06 13:54:02.027860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.027869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:38.783 [2024-11-06 13:54:02.028187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.028195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:38.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:38.783 [2024-11-06 13:54:02.028502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.028511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:38.783 [2024-11-06 13:54:02.028811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.028820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:38.783 [2024-11-06 13:54:02.029110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.029119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 [2024-11-06 13:54:02.029410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-06 13:54:02.029418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
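[editor's note] The shell trace interleaved above shows the harness relaunching nvmf_tgt (nvmfpid=832270) inside the cvl_0_0_ns_spdk namespace and then waiting for its RPC UNIX socket at /var/tmp/spdk.sock ("waitforlisten 832270", max_retries=100). A minimal sketch of what such a wait loop does follows; this is an illustrative stand-in for the test suite's waitforlisten helper, not SPDK's actual implementation, and the socket path is taken from the trace:

```python
import os
import socket
import time


def wait_for_listen(sock_path: str, timeout: float = 10.0) -> bool:
    """Poll until a UNIX-domain socket at sock_path accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True   # target process is up and listening
            except OSError:
                pass          # socket file exists but nothing accepting yet
            finally:
                s.close()
        time.sleep(0.1)       # retry interval; the real helper caps retries
    return False
```

The two-stage check (file exists, then connect succeeds) matters because the server creates the socket file before it starts accepting, so existence alone is not enough.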
00:29:38.783 [2024-11-06 13:54:02.029594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.029603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.029901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.029910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.030242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.030251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.030572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.030581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.030863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.030873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-06 13:54:02.031188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.031197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.031507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.031516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.031826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.031836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.032125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-06 13:54:02.032134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-06 13:54:02.032449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.032458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.032766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.032778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.033095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.033105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.033410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.033419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.033728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.033736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.034039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.034048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.034380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.034388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.034689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.034698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.034962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.034971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.035352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.035361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.035661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.035863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.036247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.036256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.036567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.036575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.036907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.036916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.037122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.037131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.037307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.037315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.037620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.037628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.037916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.037925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.038246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.038255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.038563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.038571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.038893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.038902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.039225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.039233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.039446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.039454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.039643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.039652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.040055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.040064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.040361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.040370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.040752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.040761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.041038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.041046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.041349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.041359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.041687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.041696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-06 13:54:02.042075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.042087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.042386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.042394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-06 13:54:02.042705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-06 13:54:02.042713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.043039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.043048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.043351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.043358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.043666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.043674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.043992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.044000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.044161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.044170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.044468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.044476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.044782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.044792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.045095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.045105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.045392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.045400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.045753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.045762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.046058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.046066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.046372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.046380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.046680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.046689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.046888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.046896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.047069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.047077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.047333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.047341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.047664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.047673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.047993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.048001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.048144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.048152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.048305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.048313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.048613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.048621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.048859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.048868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.049141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.049148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.049473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.049480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.049783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.049791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.050162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.050170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.050488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.050496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.050814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.050822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.051124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.051132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.051301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.051310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.051501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.051508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.051778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.051787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-06 13:54:02.052100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.052108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.052429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.052437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.052769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.052780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.053181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.053189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-06 13:54:02.053488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-06 13:54:02.053496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.053815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.053823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.054198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.054207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.054385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.054393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.054740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.054753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.055060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.055068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.055236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.055244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.055614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.055623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.055937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.055945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.056282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.056290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.056482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.056490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.056765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.056773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.057172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.057180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.057490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.057498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.057793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.057801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.057987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.057995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.058306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.058314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.058673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.058681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.058979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.058987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.059303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.059312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.059629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.059637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.059954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.059962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.060287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.060294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.060615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.060623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.060919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.060927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.061267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.061275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.061589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.061598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.061915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.061923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.062240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.062248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.062566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.062574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.062882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.062890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.063219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.063228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.063518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.063526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.063811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.063819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.064141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.064149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.064464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.064472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-06 13:54:02.064771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-06 13:54:02.064780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-06 13:54:02.064965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.064973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.065353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.065362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.065677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.065685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.065998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.066006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 
00:29:38.787 [2024-11-06 13:54:02.066327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.066334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.066660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.066668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.066990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.066998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.067292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.067300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 00:29:38.787 [2024-11-06 13:54:02.067616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.787 [2024-11-06 13:54:02.067624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.787 qpair failed and we were unable to recover it. 
00:29:38.787 [2024-11-06 13:54:02.067917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.787 [2024-11-06 13:54:02.067926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.787 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failed error triplet (errno = 111, tqpair=0x7fbb84000b90, addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 13:54:02.068 and 13:54:02.085 ...]
00:29:38.789 [2024-11-06 13:54:02.085322] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization...
00:29:38.789 [2024-11-06 13:54:02.085370] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same error triplet continues for each reconnect attempt between 13:54:02.085 and 13:54:02.102 ...]
00:29:38.790 [2024-11-06 13:54:02.102155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.790 [2024-11-06 13:54:02.102163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:38.790 qpair failed and we were unable to recover it.
00:29:38.790 [2024-11-06 13:54:02.102356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.102365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.102656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.102664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.102984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.102993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.103315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.103323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.103631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.103639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 
00:29:38.790 [2024-11-06 13:54:02.103988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.103998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.104307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.104315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.104627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.104635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.104918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.104927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.105222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.105230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 
00:29:38.790 [2024-11-06 13:54:02.105535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.105545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.105875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.105884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.106209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.106217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.790 [2024-11-06 13:54:02.106522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.790 [2024-11-06 13:54:02.106532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.790 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.106822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.106831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.107149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.107158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.107505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.107514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.107823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.107831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.108143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.108151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.108471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.108479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.108783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.108791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.109124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.109132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.109491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.109499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.109815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.109823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.110161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.110169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.110483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.110492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.110807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.110816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.111151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.111159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.111445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.111453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.111767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.111775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.112110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.112117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.112429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.112437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.112733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.112742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.113034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.113043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.113362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.113371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.113697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.113705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.113999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.114008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.114324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.114332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.114655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.114662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.115019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.115027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.115335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.115343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.115668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.115676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.116020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.116028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.116345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.116353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.116671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.116679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.117003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.117011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.117334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.117341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.117660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.117667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.117858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.117868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.118205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.118214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 
00:29:38.791 [2024-11-06 13:54:02.118528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.791 [2024-11-06 13:54:02.118539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.791 qpair failed and we were unable to recover it. 00:29:38.791 [2024-11-06 13:54:02.118860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.118868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-06 13:54:02.119188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.119196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-06 13:54:02.119514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.119522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-06 13:54:02.119824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.119832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 
00:29:38.792 [2024-11-06 13:54:02.120152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.120161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-06 13:54:02.120459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-06 13:54:02.120467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.120754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.120763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.121064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.121074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.121350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.121359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 
00:29:39.068 [2024-11-06 13:54:02.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.121726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.121945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.121954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.122284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.122292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.122593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.122600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.122903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.122913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 
00:29:39.068 [2024-11-06 13:54:02.123229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.123237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.123421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.068 [2024-11-06 13:54:02.123429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.068 qpair failed and we were unable to recover it. 00:29:39.068 [2024-11-06 13:54:02.123650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.123658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.123969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.123978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.124362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.124370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 
00:29:39.069 [2024-11-06 13:54:02.124709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.124718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.125035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.125044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.125239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.125248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.125656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.125665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.125962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.125970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 
00:29:39.069 [2024-11-06 13:54:02.126162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.126170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.126461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.126469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.126784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.126792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.127135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.127143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 00:29:39.069 [2024-11-06 13:54:02.127327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.069 [2024-11-06 13:54:02.127336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.069 qpair failed and we were unable to recover it. 
00:29:39.069 [... identical sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for each retry from 13:54:02.127668 through 13:54:02.160202 ...]
00:29:39.072 [2024-11-06 13:54:02.160548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.160557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.160873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.160881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.161185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.161193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.161501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.161509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.161776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.161784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.072 [2024-11-06 13:54:02.161981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.161989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.162152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.162160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.162499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.162509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.162820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.162829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.163125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.163133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.072 [2024-11-06 13:54:02.163421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.163429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.163752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.163761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.164040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.164047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.164357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.164366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.164684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.164692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.072 [2024-11-06 13:54:02.165005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.165013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.165329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.165337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.165644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.165652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.165844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.165852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.166172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.166181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.072 [2024-11-06 13:54:02.166366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.166375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.166701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.166710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.167020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.167029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.167340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.167349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.167659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.167668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.072 [2024-11-06 13:54:02.167981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.167988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.168289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.168297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.168604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.168611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.168910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.168918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 00:29:39.072 [2024-11-06 13:54:02.169115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.072 [2024-11-06 13:54:02.169122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.072 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.169446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.169453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.169761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.169769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.170047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.170055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.170412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.170419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.170712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.170721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.171037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.171046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.171361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.171369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.171678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.171686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.171997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.172005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.172310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.172318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.172629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.172638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.172912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.172920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.173230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.173238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.173548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.173556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.173865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.173874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.174208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.174217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.174433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.174440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.174767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.175049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.175057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.175365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.175374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.175705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.175712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.176024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.176032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.176341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.176349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.176674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.176681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.177068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.177077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.177391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.177399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.177722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.177730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.178054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.178062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.178231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.178240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.178552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.178560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 
00:29:39.073 [2024-11-06 13:54:02.178728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.178737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.179055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.073 [2024-11-06 13:54:02.179063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.073 qpair failed and we were unable to recover it. 00:29:39.073 [2024-11-06 13:54:02.179375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.179383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.179689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.179697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.179978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.179986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 
00:29:39.074 [2024-11-06 13:54:02.180301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.180308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.180603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.180611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.180785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.180794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.181041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.181048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.181395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.181403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 
00:29:39.074 [2024-11-06 13:54:02.181697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.181706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.181993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.182001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.182356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.182527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.182535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 00:29:39.074 [2024-11-06 13:54:02.182731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.182738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 
00:29:39.074 [2024-11-06 13:54:02.183030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.074 [2024-11-06 13:54:02.183037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.074 qpair failed and we were unable to recover it. 
00:29:39.074 [2024-11-06 13:54:02.184831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 
[... the same three-line error sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeated for each reconnect attempt through 13:54:02.216282 ...]
00:29:39.077 [2024-11-06 13:54:02.216655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.216663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.216960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.217308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.217316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.217615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.217623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.217948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 
00:29:39.077 [2024-11-06 13:54:02.218287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.218296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.218596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.218604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.218664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.218670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.218965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.218974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.219282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.219290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 
00:29:39.077 [2024-11-06 13:54:02.219679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.219688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.219990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.220000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.220204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.220211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.220573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.220580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.220628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.077 [2024-11-06 13:54:02.220656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.077 [2024-11-06 13:54:02.220663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.077 [2024-11-06 13:54:02.220670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:39.077 [2024-11-06 13:54:02.220676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.077 [2024-11-06 13:54:02.220869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.220878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.221241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.221249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.077 [2024-11-06 13:54:02.221552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.077 [2024-11-06 13:54:02.221559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.077 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.221866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.221874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.222204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.222261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:39.078 [2024-11-06 13:54:02.222374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:39.078 [2024-11-06 13:54:02.222536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:39.078 [2024-11-06 13:54:02.222572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.222579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.222537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:39.078 [2024-11-06 13:54:02.222805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.222813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.223013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.223021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.223346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.223354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.223588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.223596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.223907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.223915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.224223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.224230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.224517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.224525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.224833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.224842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.225152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.225411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.225420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.225583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.225592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.225914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.225923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.226199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.226208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.226410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.226417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.226472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.226480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.226706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.226713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.226886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.226895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.227236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.227244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.227423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.227432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.227615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.227624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.227933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.227941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.228156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.228164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.228517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.228526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.228668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.228675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.229001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.229009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.229326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.229334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.229633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.229641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.229960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.229968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.230282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.230290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.230592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.230601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.230907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.230914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.231255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.231263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.231562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.231570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 
00:29:39.078 [2024-11-06 13:54:02.231873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.078 [2024-11-06 13:54:02.231881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.078 qpair failed and we were unable to recover it. 00:29:39.078 [2024-11-06 13:54:02.232194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.232202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.232485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.232493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.232774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.232783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.233083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.233091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.233359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.233367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.233544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.233553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.233810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.233819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.234210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.234217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.234521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.234530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.234724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.234733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.235030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.235039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.235354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.235363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.235568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.235575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.235856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.235867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.236056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.236065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.236369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.236378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.236677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.236685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.236993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.237001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.237318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.237326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.237484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.237493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.237813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.237820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.238144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.238153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.238321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.238330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.238639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.238647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.238968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.238976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.239272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.239280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.239591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.239601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.239903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.239911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.240228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.240237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.240421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.240429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.240750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.240758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.241029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.241038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.241355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.241364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.241642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.241650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 
00:29:39.079 [2024-11-06 13:54:02.241929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.241938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.242246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.242254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.242443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.242451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.242755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.242764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.079 qpair failed and we were unable to recover it. 00:29:39.079 [2024-11-06 13:54:02.243073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.079 [2024-11-06 13:54:02.243082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.243389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.243397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.243715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.243724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.244054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.244063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.244371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.244381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.244684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.244694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.245007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.245016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.245342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.245350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.245666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.245675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.245979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.245988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.246294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.246302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.246593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.246601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.246775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.246784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.247096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.247106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.247412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.247421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.247753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.247764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.248077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.248085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.248387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.248396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.248699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.248707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.248961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.248970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.249145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.249153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.249376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.249384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.249702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.249710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.250004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.250012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.250312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.250320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.250640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.250648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.250958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.250966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.251249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.251256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.251555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.251563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.251878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.251887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.252211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.252220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.252544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.252552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.252859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.252866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.253156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.253164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.253439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.253447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.253790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.253798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 
00:29:39.080 [2024-11-06 13:54:02.254098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.254107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.080 [2024-11-06 13:54:02.254419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.080 [2024-11-06 13:54:02.254427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.080 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.254726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.254735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.255084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.255093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.255352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.255361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.255673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.255682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.255995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.256003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.256173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.256182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.256494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.256503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.256700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.256709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.256972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.256980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.257292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.257301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.257612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.257620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.257834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.257841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.258143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.258150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.258434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.258442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.258753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.258762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.258948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.258956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.259259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.259267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.259598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.259608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.259911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.259919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.260259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.260267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.260575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.260583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.260925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.260934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.261238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.261246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.261565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.261573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.261876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.261885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.262180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.262187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.262528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.262536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.262848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.262856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.263155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.263163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.263456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.263465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.263792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.264110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.264119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.264449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.264457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.264758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.264766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.265087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.265095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.265407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.265415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.265754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 00:29:39.081 [2024-11-06 13:54:02.266075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.081 [2024-11-06 13:54:02.266083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.081 qpair failed and we were unable to recover it. 
00:29:39.081 [2024-11-06 13:54:02.266392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.082 [2024-11-06 13:54:02.266400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.082 qpair failed and we were unable to recover it.
00:29:39.085 [... same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through 2024-11-06 13:54:02.300283 ...]
00:29:39.085 [2024-11-06 13:54:02.300569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.300577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.300883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.300891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.301197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.301205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.301385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.301395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.301717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.301726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.302002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.302010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.302333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.302342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.302643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.302651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.302932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.302940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.303119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.303128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.303458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.303466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.303764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.303772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.304144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.304152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.304334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.304342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.304665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.304673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.304898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.304907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.304947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.304954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.305124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.305133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.305323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.305331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.305610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.305618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.305790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.305799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.305995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.306003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.306271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.306280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.306581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.306589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.306629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.306637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.306915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.306924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.307304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.307312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.307490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.307498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.307789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.307797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.308129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.308137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 
00:29:39.085 [2024-11-06 13:54:02.308426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.308433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.308732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.308740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.309088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.309096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.085 [2024-11-06 13:54:02.309403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.085 [2024-11-06 13:54:02.309411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.085 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.309752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.309761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.310050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.310058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.310374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.310383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.310561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.310570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.310879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.310889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.311219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.311228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.311531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.311539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.311864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.311872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.312057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.312066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.312252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.312260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.312605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.312613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.312808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.312817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.313104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.313112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.313379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.313387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.313578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.313586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.313790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.313798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.314073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.314082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.314396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.314405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.314774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.314782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.314942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.314951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.315141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.315148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.315362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.315370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.315533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.315543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.315857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.315865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.316177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.316185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.316492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.316499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.316802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.316811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.316975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.316982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.317164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.317502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.317510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.317811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.317821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 
00:29:39.086 [2024-11-06 13:54:02.318136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.318144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.318427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.318436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.086 [2024-11-06 13:54:02.318768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.086 [2024-11-06 13:54:02.318776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.086 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.318935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.318943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.319272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.319279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 
00:29:39.087 [2024-11-06 13:54:02.319613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.319621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.319869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.319877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.320198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.320205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.320474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.320482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.320653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.320663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 
00:29:39.087 [2024-11-06 13:54:02.320707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.320715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.321014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.321022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.321316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.321324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.321649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.321657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 00:29:39.087 [2024-11-06 13:54:02.321948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.087 [2024-11-06 13:54:02.321956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.087 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.354462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.354470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.354773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.354782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.355127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.355135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.355435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.355443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.355759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.355767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.356074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.356082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.356302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.356637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.356645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.356960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.356968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.357271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.357278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.357587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.357595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.357922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.357931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.358322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.358330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.358632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.358640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.358812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.358820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.359156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.359164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.359537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.359547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.359837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.359845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.360179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.360187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.360487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.360496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.360800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.360808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.361194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.361202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.361438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.361446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.361714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.361721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.362038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.362047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.362364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.362372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.362660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.362668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.362837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.362845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.363153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.363160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.363455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.363463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 
00:29:39.090 [2024-11-06 13:54:02.363627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.090 [2024-11-06 13:54:02.363635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.090 qpair failed and we were unable to recover it. 00:29:39.090 [2024-11-06 13:54:02.363859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.363867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.364048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.364219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.364227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.364483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.364490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.364691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.364699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.364886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.364894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.365084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.365092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.365377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.365385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.365699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.365707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.365871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.365880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.365998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.366006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.366317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.366325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.366657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.366665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.366846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.366854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.367183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.367191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.367513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.367521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.367862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.367870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.368193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.368200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.368468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.368476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.368790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.368798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.369056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.369064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.369364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.369373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.369536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.369544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.369628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.369636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.369974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.369983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.370167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.370176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.370481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.370489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.370773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.370781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.370954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.370963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 
00:29:39.091 [2024-11-06 13:54:02.371077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.371085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.371391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.371400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.371613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.371621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.371945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.091 qpair failed and we were unable to recover it. 00:29:39.091 [2024-11-06 13:54:02.372134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.091 [2024-11-06 13:54:02.372142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 
00:29:39.092 [2024-11-06 13:54:02.372390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.372397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.372723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.372731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.373097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.373104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.373283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.373292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.373548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.373555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 
00:29:39.092 [2024-11-06 13:54:02.373857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.373865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.374049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.374057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.374372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.374380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.374611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.374619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.374835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.374843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 
00:29:39.092 [2024-11-06 13:54:02.375027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.375035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.375194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.375201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.375558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.375565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.375735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.375744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-06 13:54:02.376059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-06 13:54:02.376067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 
00:29:39.095 [2024-11-06 13:54:02.407985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.407994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.408306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.408314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.408644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.408652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.408970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.408978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.409294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.409303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 
00:29:39.095 [2024-11-06 13:54:02.409496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.409504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.409839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.409847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.410173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.410181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.410481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.410489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.410796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.410804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 
00:29:39.095 [2024-11-06 13:54:02.411081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.411089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.411401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.411408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.411723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.411732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.412042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.412051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.412353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.412362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 
00:29:39.095 [2024-11-06 13:54:02.412687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.412695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.412999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.413007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.413314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.413322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.413638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.413646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.413960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.413968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 
00:29:39.095 [2024-11-06 13:54:02.414297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.414305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.414534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.414542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.414815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.414824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.095 [2024-11-06 13:54:02.415112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.095 [2024-11-06 13:54:02.415120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.095 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.415424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.415432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.415804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.415813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.416086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.416093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.416369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.416376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.416687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.416695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.416967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.416976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.417275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.417283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.417593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.417603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.417919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.417928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.418242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.418251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.418552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.418560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.418876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.418884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.419207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.419214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.419527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.419535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.419814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.419823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.420143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.420151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.420474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.420482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.420761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.420769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.421075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.421083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.421357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.421366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.421679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.421688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.421869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.421877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.422174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.422182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.422497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.422505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.422814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.422822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.423210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.423219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.423560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.423569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.423740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.423752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.424084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.424092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.424386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.424394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.424565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.424574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.424887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.424895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.425221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.425229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.425401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.425408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.425744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.425755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.426079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.426087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 
00:29:39.096 [2024-11-06 13:54:02.426393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.426400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.426719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.096 [2024-11-06 13:54:02.426727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.096 qpair failed and we were unable to recover it. 00:29:39.096 [2024-11-06 13:54:02.427041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.097 [2024-11-06 13:54:02.427049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.097 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.427358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.427368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.427550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.427559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 
00:29:39.376 [2024-11-06 13:54:02.427788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.427796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.428138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.428145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.428210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.428216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.428377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.428385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.428676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.428684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 
00:29:39.376 [2024-11-06 13:54:02.429004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.429012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.429350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.429359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.429539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.429548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.429723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.429732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 00:29:39.376 [2024-11-06 13:54:02.429929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.429938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 
00:29:39.376 [2024-11-06 13:54:02.430221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.376 [2024-11-06 13:54:02.430229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.376 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.461759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.461767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.462057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.462066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.462378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.462385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.462688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.462696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.462971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.462979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.463269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.463278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.463582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.463589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.463907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.463916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.464201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.464209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.464546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.464555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.464863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.464870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.465195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.465203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.465514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.465830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.465839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.466043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.466051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.466367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.466375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.466657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.466666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.466994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.467003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.467318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.467327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.467631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.467639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.467911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.467919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.468269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.468277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.468487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.468496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.468850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.468858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.469183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.469191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.469482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.469490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.469792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.469800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.470111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.470119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.470423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.470431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.470624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.470632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.470906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.470914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.471089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.471097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.471434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.471442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.471773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.471781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.472094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.472102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 
00:29:39.380 [2024-11-06 13:54:02.472476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.472484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.472683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.472691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.472970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.380 [2024-11-06 13:54:02.472978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.380 qpair failed and we were unable to recover it. 00:29:39.380 [2024-11-06 13:54:02.473169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.473175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.473438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.473446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.473757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.473765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.474057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.474067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.474384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.474685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.474694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.475012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.475021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.475309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.475317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.475624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.475632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.475927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.475935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.476258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.476266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.476550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.476558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.476874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.476883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.477191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.477200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.477374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.477382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.477573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.477581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.477846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.477855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.478031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.478039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.478351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.478359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.478691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.478698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.479008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.479016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.479334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.479342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.479647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.479655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.479941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.479949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.480234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.480242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.480544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.480552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.480863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.480871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 
00:29:39.381 [2024-11-06 13:54:02.481180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.481188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.481493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.481501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.481814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.481822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.482092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.482101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.381 qpair failed and we were unable to recover it. 00:29:39.381 [2024-11-06 13:54:02.482419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.381 [2024-11-06 13:54:02.482428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 
00:29:39.382 [2024-11-06 13:54:02.482611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.382 [2024-11-06 13:54:02.482618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 00:29:39.382 [2024-11-06 13:54:02.482935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.382 [2024-11-06 13:54:02.482945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 00:29:39.382 [2024-11-06 13:54:02.483246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.382 [2024-11-06 13:54:02.483255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 00:29:39.382 [2024-11-06 13:54:02.483578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.382 [2024-11-06 13:54:02.483586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 00:29:39.382 [2024-11-06 13:54:02.483889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.382 [2024-11-06 13:54:02.483897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.382 qpair failed and we were unable to recover it. 
00:29:39.382 [2024-11-06 13:54:02.484203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.382 [2024-11-06 13:54:02.484211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.382 qpair failed and we were unable to recover it.
00:29:39.385 [2024-11-06 13:54:02.517023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.517031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.517298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.517307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.517623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.517631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.517933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.517942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.518268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.518276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 
00:29:39.385 [2024-11-06 13:54:02.518594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.518601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.518902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.518910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.519223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.519231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.519532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.519540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.519843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.519852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 
00:29:39.385 [2024-11-06 13:54:02.520156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.520164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.520450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.520458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.520726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.520734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.521028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.521037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.521373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.521382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 
00:29:39.385 [2024-11-06 13:54:02.521665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.521673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.521903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.521911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.522217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.522225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.522526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.522534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.522867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.522875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 
00:29:39.385 [2024-11-06 13:54:02.523175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.523183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.523498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.523505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.523812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.523820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.524080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.524089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.524407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.524414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 
00:29:39.385 [2024-11-06 13:54:02.524744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.524757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.525024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.525032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.525370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.385 [2024-11-06 13:54:02.525378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.385 qpair failed and we were unable to recover it. 00:29:39.385 [2024-11-06 13:54:02.525682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.525690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.525965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.525974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.526278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.526286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.526578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.526586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.526912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.526921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.527249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.527257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.527432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.527441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.527751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.527759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.528078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.528086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.528417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.528425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.528675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.528685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.528982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.528991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.529301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.529309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.529612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.529620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.529903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.529911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.530205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.530213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.530522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.530530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.530808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.530816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.531108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.531116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.531429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.531437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.531726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.531734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.532033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.532042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.532360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.532368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.532702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.532711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.533011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.533020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.533338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.533346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.533654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.533662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.533976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.533984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.534298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.534305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.534613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.534621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.534910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.534918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.535225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.535233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.535547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.535556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.535864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.535872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.536174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.536183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.536398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.536405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.536575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.536584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 
00:29:39.386 [2024-11-06 13:54:02.536897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.536904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.386 [2024-11-06 13:54:02.537212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.386 [2024-11-06 13:54:02.537220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.386 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.537550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.537558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.537733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.537742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.537948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.537956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 
00:29:39.387 [2024-11-06 13:54:02.538284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.538292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.538627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.538635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.538934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.538942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.539244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.539252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 00:29:39.387 [2024-11-06 13:54:02.539426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.387 [2024-11-06 13:54:02.539435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.387 qpair failed and we were unable to recover it. 
00:29:39.387 [2024-11-06 13:54:02.539623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.387 [2024-11-06 13:54:02.539631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.387 qpair failed and we were unable to recover it.
00:29:39.390 [... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously through 2024-11-06 13:54:02.569790 ...]
00:29:39.390 [2024-11-06 13:54:02.569969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.569976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.570155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.570163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.570434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.570442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.570630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.570638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.570829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.570837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-06 13:54:02.571012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.571020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.571294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.571302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.571476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.571485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.571657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.571666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.571962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.571970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-06 13:54:02.572271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.572278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.572464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.572473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.572776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.572784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.573064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.573072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.573261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.573270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-06 13:54:02.573474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.573483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.573786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.573794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.574043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.574051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.574214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.574222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.574517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.574525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-06 13:54:02.574826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.574834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.575044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.575373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.575381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.575555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.575564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-06 13:54:02.575728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-06 13:54:02.575735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-06 13:54:02.575959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.575968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.576259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.576506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.576514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.576846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.576855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.577160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.577168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.577486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.577494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.577786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.577794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.578092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.578101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.578414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.578423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.578723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.578732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.579077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.579086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.579273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.579281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.579461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.579469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.579733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.579742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.580087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.580434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.580443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.580753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.580761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.581040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.581048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.581318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.581326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.581634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.581642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.581975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.581982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.582142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.582151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.582441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.582449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.582754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.582764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.583039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.583047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.583358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.583365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.583656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.583664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.583982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.583990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.584293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.584611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.584620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.584793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.584802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.585130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.585138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.585455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.585463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.585772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.585780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.586089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.586097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-06 13:54:02.586401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.586409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.586716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.586724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-06 13:54:02.586919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-06 13:54:02.586928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.587212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.587219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.587403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.587412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-06 13:54:02.587756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.587764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.588079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.588087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.588382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.588390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.588737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.588749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.589028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.589036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-06 13:54:02.589343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.589350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.589633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.589641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.589855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.589863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.590174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.590182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-06 13:54:02.590494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.590503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-06 13:54:02.590809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-06 13:54:02.590817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.395 [last message repeated with identical content through 2024-11-06 13:54:02.623660] 
00:29:39.395 [2024-11-06 13:54:02.623999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.624007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.624270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.624278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.624583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.624918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.624927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.625120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.625128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 
00:29:39.395 [2024-11-06 13:54:02.625419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.625427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.625627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.625635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.625813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.625821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.625975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.625984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.626268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.626277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 
00:29:39.395 [2024-11-06 13:54:02.626433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.626442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.626632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.395 [2024-11-06 13:54:02.626640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.395 qpair failed and we were unable to recover it. 00:29:39.395 [2024-11-06 13:54:02.626944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.626953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.627266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.627274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.627455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.627464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.627786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.627794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.627966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.627973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.628287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.628295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.628596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.628605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.628887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.628895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.629204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.629212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.629528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.629536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.629836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.629844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.630175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.630183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.630482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.630491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.630669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.630678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.630970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.630979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.631268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.631278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.631470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.631477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.631631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.631639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.631961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.631968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.632317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.632325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.632632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.632640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.632932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.632939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.633123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.633132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.633434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.633442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.633782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.633791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.633944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.633951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.634299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.634307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.634489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.634498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.634655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.634663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.635006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.635013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.635329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.635337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.635666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.635674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.635953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.635961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 
00:29:39.396 [2024-11-06 13:54:02.636275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.396 [2024-11-06 13:54:02.636283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.396 qpair failed and we were unable to recover it. 00:29:39.396 [2024-11-06 13:54:02.636462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.636470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.636649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.636657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.636844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.636852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.637193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.637201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.637378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.637388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.637722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.637731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.638047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.638056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.638369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.638376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.638642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.638649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.638950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.638959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.639136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.639144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.639471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.639479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.639779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.639789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.640193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.640201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.640502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.640510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.640775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.640782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.641101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.641109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.641400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.641408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.641446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.641453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.641719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.641727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.641911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.641919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.642276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.642284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.642564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.642572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.642880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.642888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.643152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.643160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.643333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.643342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.643649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.643656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.643821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.643829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 00:29:39.397 [2024-11-06 13:54:02.643873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.643889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it. 
00:29:39.397 [2024-11-06 13:54:02.644195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.397 [2024-11-06 13:54:02.644203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.397 qpair failed and we were unable to recover it.
[… the same error pair — posix.c:1054:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats continuously, differing only in timestamps, from 2024-11-06 13:54:02.644195 through 2024-11-06 13:54:02.674486 (log time 00:29:39.397–00:29:39.400) …]
00:29:39.400 [2024-11-06 13:54:02.674789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-06 13:54:02.674796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.675088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.675096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.675469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.675476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.675778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.675786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.675958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.675965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.676144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.676152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.676466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.676474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.676808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.676817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.677078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.677086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.677436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.677444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.677754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.677762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.677961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.677969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.678125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.678133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.678286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.678293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.678580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.678588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.678899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.678910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.679227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.679235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.679503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.679511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.679821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.679829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.680038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.680046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.680360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.680367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.680678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.680686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.680974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.680983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.681248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.681256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.681460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.681752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.681760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.682043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.682051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.682330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.682338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.682618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.682625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.682929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.682938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.683232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.683241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.683553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.683562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.683740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.684077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.684085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.684262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.684270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-06 13:54:02.684541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.684549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.684875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.684884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.685200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.685208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-06 13:54:02.685516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-06 13:54:02.685524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.685839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.685848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.686152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.686160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.686466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.686474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.686808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.686816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.687148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.687414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.687422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.687730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.687739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.687934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.687942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.688305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.688313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.688619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.688626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.688909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.688917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.689103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.689111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.689409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.689416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.689755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.689764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.690046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.690054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.690374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.690381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.690675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.690685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.691005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.691013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.691290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.691298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.691591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.691600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.691885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.691894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.692071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.692079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.692394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.692403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.692705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.692712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.693027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.693035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.693351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.693359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.693660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.693669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.693957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.693965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.694313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.694320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.694623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.694630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.694959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.694967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.695278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.695286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.695612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.695620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.695923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.695932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.696267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.696275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.696475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.696483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-06 13:54:02.696787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-06 13:54:02.696796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-06 13:54:02.697098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.697107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.697414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.697425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.697736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.697749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.697963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.697972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.698134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.698143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.698472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.698482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.698801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.698809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.699150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.699158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.699461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.699470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.699788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.699796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.700112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.700121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.700420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.700429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.700751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.700760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.701100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.701108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.701414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.701422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.701715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.701725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.702051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.702061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.702369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.702378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.702698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.702707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.703007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.703017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.703325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.703332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.703523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.703531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.703808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.703816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.704158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.704166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.704483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.704491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.704671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.704679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.704986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.704994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.705149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.705158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.705334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.705341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.705657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.705665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.705734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.705741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.706082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.706091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.706423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.706431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.706737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.706749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-06 13:54:02.706913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.706921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.707091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.707098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.707411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.707419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.707617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-06 13:54:02.707625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-06 13:54:02.707936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.707944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.708133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.708141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.708482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.708491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.708799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.708807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.709037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.709044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.709316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.709324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.709663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.709671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.709968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.709976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.710142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.710151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.710432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.710440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.710781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.710790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.710999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.711007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.711190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.711198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.711508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.711516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.711863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.711871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.712175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.712183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.712331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.712339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.712610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.712618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.712797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.712806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.712978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.712986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.713161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.713178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.713331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.713340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.713617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.713626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.713923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.713932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.714303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.714312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.714614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.714623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.714959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.714967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.715256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.715264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.715429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.715437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.715595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.715602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.715920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.715928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 
00:29:39.404 [2024-11-06 13:54:02.716248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.716256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.716572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.716580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.716889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.716897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.404 qpair failed and we were unable to recover it. 00:29:39.404 [2024-11-06 13:54:02.717213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.404 [2024-11-06 13:54:02.717222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.405 qpair failed and we were unable to recover it. 00:29:39.405 [2024-11-06 13:54:02.717429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.405 [2024-11-06 13:54:02.717438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.405 qpair failed and we were unable to recover it. 
00:29:39.405 [2024-11-06 13:54:02.717532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.405 [2024-11-06 13:54:02.717539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.405 qpair failed and we were unable to recover it. 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 
00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 [2024-11-06 13:54:02.718319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 
starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O 
failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 [2024-11-06 13:54:02.719144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 
00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Read completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.405 starting I/O failed 00:29:39.405 Write completed with error (sct=0, sc=8) 00:29:39.406 starting I/O failed 00:29:39.406 Read completed with error (sct=0, sc=8) 00:29:39.406 starting I/O failed 00:29:39.406 Read completed with error (sct=0, sc=8) 00:29:39.406 starting I/O failed 00:29:39.406 Write completed with error (sct=0, sc=8) 00:29:39.406 starting I/O failed 00:29:39.406 [2024-11-06 13:54:02.719461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.406 [2024-11-06 13:54:02.719965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.406 [2024-11-06 13:54:02.719995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.406 qpair failed and we were unable to recover it. 00:29:39.406 [2024-11-06 13:54:02.720171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.406 [2024-11-06 13:54:02.720182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.406 qpair failed and we were unable to recover it. 
00:29:39.406 [2024-11-06 13:54:02.720372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.720380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.720678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.720687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.720874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.720882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.721139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.721147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.721350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.721358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.721670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.721679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.721853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.721861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.722200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.722208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.722286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.722292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.722455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.722463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.722793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.722801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.723026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.723034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.723335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.723343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.723506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.723515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.723704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.723712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.723896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.723906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.724100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.724107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.724273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.724283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.724458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.724466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.724772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.724782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.724955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.724964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.725124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.725132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.725360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.725368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.725708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.725717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.725905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.725914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.726242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.726250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.726558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.726567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.726739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.726752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.727053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.727063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.727218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.727228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.727433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.727441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.727731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.727740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.727976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.727984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.728170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.406 [2024-11-06 13:54:02.728179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.406 qpair failed and we were unable to recover it.
00:29:39.406 [2024-11-06 13:54:02.728326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.728334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.728505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.728514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.728822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.728831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.729125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.729133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.729424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.729432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.729509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.729515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.729703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.729710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.730154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.730162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.730353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.730556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.730564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.730743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.730757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.731064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.731072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.731252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.731261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.731450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.731458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.731808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.731816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.732091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.732099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.732299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.732307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.732626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.732635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.732822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.732832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.407 [2024-11-06 13:54:02.733123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.407 [2024-11-06 13:54:02.733130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.407 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.733418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.733428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.733763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.733772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.734103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.734111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.734413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.734422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.734724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.734732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.735046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.735054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.735336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.735344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.735612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.735620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.735883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.735891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.736219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.736228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.736510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.736519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.736832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.736841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.737144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.737153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.737464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.737472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.737804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.737815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.738134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.738143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.738408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.738416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.738605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.738614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.738802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.739153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.739161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.739461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.739468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.739769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-06 13:54:02.739779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.683 qpair failed and we were unable to recover it.
00:29:39.683 [2024-11-06 13:54:02.740107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.740116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.740380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.740388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.740706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.740715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.740886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.740897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.741254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.741263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.741562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.741743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.741756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.742044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.742052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.742350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.742358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.742710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.742718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.743029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.743037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.743340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.743631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.743639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.743914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.743922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.744231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.744239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.744504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.744513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.744802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.744810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.745160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.745169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.745511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.684 [2024-11-06 13:54:02.745520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.684 qpair failed and we were unable to recover it.
00:29:39.684 [2024-11-06 13:54:02.745826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.745834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.746136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.746144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.746444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.746452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.746751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.746759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.747034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.747042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 
00:29:39.684 [2024-11-06 13:54:02.747324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.747333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.747559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.747568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.747877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.747885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.748242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.748251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.748486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.748495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 
00:29:39.684 [2024-11-06 13:54:02.748800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.748808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.749156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.749164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.749468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.749476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.749812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.749822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.750159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.750168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 
00:29:39.684 [2024-11-06 13:54:02.750471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.750480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.684 [2024-11-06 13:54:02.750791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.684 [2024-11-06 13:54:02.750799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.684 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.751101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.751109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.751406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.751415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.751721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.751729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.752073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.752081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.752257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.752265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.752587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.752595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.752901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.752909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.753232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.753240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.753525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.753533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.753849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.753857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.754160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.754168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.754357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.754366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.754712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.754720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.755023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.755031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.755209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.755217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.755515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.755523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.755769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.755777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.756076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.756085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.756367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.756376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.756689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.756697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.756990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.756998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.757273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.757282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.757582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.757590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.757903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.757912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.758106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.758115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.758397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.758405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.758714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.758723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.759008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.759016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.759307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.759315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.759627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.759636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.759935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.759944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.760115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.760124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.760462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.760470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 
00:29:39.685 [2024-11-06 13:54:02.760772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.760780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.761097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.761104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.761374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.761382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.761721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.761731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.685 qpair failed and we were unable to recover it. 00:29:39.685 [2024-11-06 13:54:02.762039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.685 [2024-11-06 13:54:02.762048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.762375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.762384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.762742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.762754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.763073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.763081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.763340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.763347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.763649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.763657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.763958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.763966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.764251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.764260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.764567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.764576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.764882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.764891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.765218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.765226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.765535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.765544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.765719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.765728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.765924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.765933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.766134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.766142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.766452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.766462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.766738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.766752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.767079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.767087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.767400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.767408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.767743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.767755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.768041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.768049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.768213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.768221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.768529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.768537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.768874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.768883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.769067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.769076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.769290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.769298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.769649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.769657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.770088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.770096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.770409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.770417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.770587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.770597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-06 13:54:02.770924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-06 13:54:02.770932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-06 13:54:02.771221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.686 [2024-11-06 13:54:02.771230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.686 qpair failed and we were unable to recover it.
00:29:39.689 [output condensed: the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats roughly 114 more times for tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420, with timestamps advancing from 13:54:02.771537 through 13:54:02.803139]
00:29:39.689 [2024-11-06 13:54:02.803450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-06 13:54:02.803459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.803795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.803804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.803988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.803996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.804169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.804178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.804490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.804499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.804770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.804779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.805168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.805176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.805500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.805508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.805819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.805828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.806133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.806142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.806423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.806432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.806752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.806762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.807079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.807088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.807393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.807403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.807724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.807733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.808075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.808085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.808387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.808396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.808551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.808560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.808897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.808906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.809213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.809222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.809524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.809534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.809875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.809884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.810180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.810189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.810495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.810503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.810823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.810833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.811006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.811015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.811317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.811326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.811632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.811641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.811927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.811935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.812237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.812246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.812534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.812542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.812859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.812868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.813168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.813176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.813456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.813464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.813795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.813804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 
00:29:39.690 [2024-11-06 13:54:02.814088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.814097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.814412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.814422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.814727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.814737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.690 [2024-11-06 13:54:02.815067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.690 [2024-11-06 13:54:02.815076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.690 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.815265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.815274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.815464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.815473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.815789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.815798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.816089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.816097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.816345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.816354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.816527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.816536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.816908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.816918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.817098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.817107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.817419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.817428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.817740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.817758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.818031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.818040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.818365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.818374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.818711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.818719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.818939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.818947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.819264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.819273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.819589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.819598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.819902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.819911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.820222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.820231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.820537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.820546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.820860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.820868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.821224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.821233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.821459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.821468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.821650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.821659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.821963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.821972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.822301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.822309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.822618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.822627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.822914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.822923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.823111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.823120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.823455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.823463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.823810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.823819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.824133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.824142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.824474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.824483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.824783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.824792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.825097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.825106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.825419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.825428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.825763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.825772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 
00:29:39.691 [2024-11-06 13:54:02.826069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.826079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.691 [2024-11-06 13:54:02.826346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.691 [2024-11-06 13:54:02.826355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.691 qpair failed and we were unable to recover it. 00:29:39.692 [2024-11-06 13:54:02.826524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.692 [2024-11-06 13:54:02.826533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.692 qpair failed and we were unable to recover it. 00:29:39.692 [2024-11-06 13:54:02.826827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.692 [2024-11-06 13:54:02.826836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.692 qpair failed and we were unable to recover it. 00:29:39.692 [2024-11-06 13:54:02.827149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.692 [2024-11-06 13:54:02.827157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.692 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.856012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.856020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.856361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.856369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.856707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.856715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.856993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.857001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.857214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.857221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.857537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.857546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.857882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.857890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.858198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.858205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.858510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.858518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.858837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.858845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.859094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.859105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.859454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.859463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.859781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.859789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.860082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.860090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.860273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.860282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.860478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.860486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.860735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.860743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.861068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.861076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.861382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.861390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.861671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.861678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.861947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.861955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.862280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.862288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.862593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.862601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.862929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.862938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.863280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.863288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.863599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.863607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.863793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.863800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.864081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.864090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.864363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.864371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.864685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.864693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-06 13:54:02.865007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.865015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.865321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.865329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.865631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.865640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-06 13:54:02.865926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-06 13:54:02.865935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.866240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.866249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.866533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.866541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.866875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.866884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.867206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.867213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.867517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.867525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.867725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.867732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.867948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.867956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.868314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.868322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.868675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.868683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.868904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.868913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.869090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.869097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.869419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.869426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.869756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.869765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.870043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.870050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.870367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.870374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.870700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.870708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.870924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.870934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.871238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.871246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.871527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.871831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.871839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.872196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.872204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.872374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.872382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.872691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.872699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.873012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.873020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.873306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.873314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.873585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.873593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.873800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.873808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.874130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.874315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.874324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.874647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.874654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.874961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.874970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.875272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.875280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.875474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.875481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.875803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.875812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.876120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.876127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.876399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.876407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-06 13:54:02.876742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.876753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-06 13:54:02.877046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-06 13:54:02.877054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.877368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.877377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.877679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.877687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.877942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.877950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 
00:29:39.697 [2024-11-06 13:54:02.878262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.878269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.878588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.878596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.878906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.878914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.879228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.879236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-06 13:54:02.879505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-06 13:54:02.879513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 
00:29:39.697 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:39.697 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:39.697 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:39.697 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:39.697 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.700 [... identical connect() failed (errno = 111) / qpair-failure records for tqpair=0x7fbb84000b90 (addr=10.0.0.2, port=4420) repeated through 13:54:02.911 ...]
00:29:39.700 [2024-11-06 13:54:02.912020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.912029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.912332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.912342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.912646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.912654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.912844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.912852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.913167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.913175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-11-06 13:54:02.913484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.913493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.913796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.913803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.914149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.914158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.914490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.914498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.914784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.914792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-11-06 13:54:02.915127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.915135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.915423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.915430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.915742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.915758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.916006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.916014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.916273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.916280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-11-06 13:54:02.916563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.916571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.916990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.916999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.917302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.917311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.917641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.917650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.917956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.917964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-11-06 13:54:02.918278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.918288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.918459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.918467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.918778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.918786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.919092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.919100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-11-06 13:54:02.919436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-11-06 13:54:02.919444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-11-06 13:54:02.919596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.919604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.919904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.919913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.920220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.920227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.920491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.920499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.920802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.920809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.701 [2024-11-06 13:54:02.921121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.921131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:39.701 [2024-11-06 13:54:02.921415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.921424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.701 [2024-11-06 13:54:02.921750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.921760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.701 [2024-11-06 13:54:02.922022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.922032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.922342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.922351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.922639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.922648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.922941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.922949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.923236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.923244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-11-06 13:54:02.923429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.923437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.923725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.923733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.924021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.924030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.924204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.924213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.924552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.924561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-11-06 13:54:02.924874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.924882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.925206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.925214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.925389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.925398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.925712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.925720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.926016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.926025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-11-06 13:54:02.926218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.926227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.926572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.926580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.926888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.926896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.927269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.927276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.927465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.927473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-11-06 13:54:02.927732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.927740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.928077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.928085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-11-06 13:54:02.928417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-11-06 13:54:02.928425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.928736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.928744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.929078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.929086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.929403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.929413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.929708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.929716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.929998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.930006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.930329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.930338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.930525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.930534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.930821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.930829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.931150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.931158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.931465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.931473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.931796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.931804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.932091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.932099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.932421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.932429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.932593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.932601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.932906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.932916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.933225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.933233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.933550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.933559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.933870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.933878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.934224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.934232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.934568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.934576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.934726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.934734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.935078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.935086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.935277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.935286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.935595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.935603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.935768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.935776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.936092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.936100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 00:29:39.702 [2024-11-06 13:54:02.936291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.702 [2024-11-06 13:54:02.936300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.702 qpair failed and we were unable to recover it. 
00:29:39.702 [2024-11-06 13:54:02.936467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.936475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.936753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.936761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.937083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.937091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.937405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.937413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.937752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.937760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.938071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.702 [2024-11-06 13:54:02.938079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.702 qpair failed and we were unable to recover it.
00:29:39.702 [2024-11-06 13:54:02.938412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.938420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.938723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.938731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.938927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.938935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.939246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.939255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.939556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.939565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.939909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.939917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.940210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.940219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.940375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.940382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.940688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.940696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.940861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.940872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.941169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.941177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.941487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.941496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.941801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.941810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.942083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.942091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.942372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.942380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.942680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.942689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.942776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.942784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.942985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.942993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.943309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.943318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.943603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.943612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.943915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.943923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.944195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.944203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.944506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.944514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.944800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.944809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.945123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.945132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.945443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.945450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.945785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.945794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.946130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.946139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.946434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.946443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.946779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.946788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.947079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.947087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.947375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.947382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.947675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.947683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.947958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.947968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.948270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.948277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.948573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.948581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.948898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.948907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.703 qpair failed and we were unable to recover it.
00:29:39.703 [2024-11-06 13:54:02.949078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.703 [2024-11-06 13:54:02.949086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.949130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.949137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.949327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.949336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.949641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.949649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.949923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.949931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.950096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.950105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.950271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.950278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.950566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.950574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.950879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.950888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.951219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.951227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.951394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.951403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.951717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.951726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.951771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.951782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.952088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.952095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.952273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.952281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.952483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.952491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.952785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.952794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.953099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.953107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.953420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.953428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.953732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.953740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.953916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.953923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.954100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.954108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.954418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.954425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.954727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.954735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.955073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.955081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.955296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.955304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.955513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.955522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.955684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.955692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.956013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.956021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.956192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.956201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.956518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.956527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.956805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.956813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 [2024-11-06 13:54:02.957107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.957115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.704 qpair failed and we were unable to recover it.
00:29:39.704 Malloc0
00:29:39.704 [2024-11-06 13:54:02.957412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.704 [2024-11-06 13:54:02.957421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.957640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.957648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.957799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.957807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.958120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.958128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.705 [2024-11-06 13:54:02.958312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.958321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:39.705 [2024-11-06 13:54:02.958648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.958656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.958705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.958711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.958834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.958842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:39.705 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.705 [2024-11-06 13:54:02.959103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.959111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.959371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.959379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.959684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.959692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.960031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.960040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.960246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.960253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.960573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.960580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.960890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.960898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.961212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.961221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.961513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.705 [2024-11-06 13:54:02.961521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.705 qpair failed and we were unable to recover it.
00:29:39.705 [2024-11-06 13:54:02.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.961690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.961873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.961881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.962070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.962078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.962357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.962365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.962552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.962560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-06 13:54:02.962875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.962883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.963067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.963076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.963359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.963367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.963647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.963655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.963993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.964001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-06 13:54:02.964304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.964312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.964602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.964609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.964868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.705 [2024-11-06 13:54:02.964998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.965006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.965304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.965312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.965627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.965636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-06 13:54:02.965934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.965942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.966141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.966149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.966476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.966484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.966814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.966822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-06 13:54:02.967124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-06 13:54:02.967132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-11-06 13:54:02.967427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.967435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.967736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.967744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.968043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.968052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.968356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.968365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.968685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.968693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.968969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.968977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.969281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.969289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.969476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.969484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.969795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.969803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.970106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.970114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.970438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.970446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.970779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.970787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.971111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.971119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.971422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.971430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.971745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.971757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.972085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.972094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.972396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.972404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.972707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.972714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.973016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.973024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.973338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.973346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.973535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.973545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.973717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.973725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:39.706 [2024-11-06 13:54:02.974037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.974046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-11-06 13:54:02.974338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.974347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-11-06 13:54:02.974662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.974671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-06 13:54:02.974986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.974994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.975264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.975272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.975624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.975633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.975960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.706 [2024-11-06 13:54:02.975968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.706 qpair failed and we were unable to recover it.
00:29:39.706 [2024-11-06 13:54:02.976275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.976282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.976585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.976593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.976908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.976917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.977221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.977229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.977496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.977503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.977818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.977826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.978201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.978209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.978514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.978523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.978810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.978818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.979134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.979141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.979426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.979434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.979757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.979765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.980054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.980062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.980366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.980374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.980581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.980589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.980857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.980866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.981183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.981190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.981505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.981512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.981845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.981853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.982130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.982138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.982309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.982318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.982539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.982547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.982859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.982867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.983211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.983219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.983523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.983804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.983812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.984150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.984158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.984459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.984466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.984779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.984788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.985074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.985084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.985324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.985331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.985590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.985597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] [2024-11-06 13:54:02.985916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 [2024-11-06 13:54:02.985927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.986231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-11-06 13:54:02.986239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.707 qpair failed and we were unable to recover it.
00:29:39.707 [2024-11-06 13:54:02.986533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.707 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-11-06 13:54:02.986541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.708 [2024-11-06 13:54:02.986868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.986876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.987221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.987230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.987588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.987597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.987890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.987899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.988185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.988194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.988498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.988505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.988776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.988784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.989001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.989010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.989278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.989286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.989592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.989600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.989910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.989917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.990234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.990242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.990546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.990554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.990821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.990829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.991143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.991151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.991432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.991440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.991707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.708 [2024-11-06 13:54:02.991716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420
00:29:39.708 qpair failed and we were unable to recover it.
00:29:39.708 [2024-11-06 13:54:02.992040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.992048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.992357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.992365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.992659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.992668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.992986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.992994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.993313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.993320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-11-06 13:54:02.993586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.993594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.993964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.993972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.994281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.994288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.994556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.994564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.994869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.994876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-11-06 13:54:02.995062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.995071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.995404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.995412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.995573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.995582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.995852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.995860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.996049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.996058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-11-06 13:54:02.996281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.996289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.996608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.996618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.996807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.996818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.997085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.997093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-11-06 13:54:02.997384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-11-06 13:54:02.997391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-11-06 13:54:02.997620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.997629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:02.997794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.997802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:02.997990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.997998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.709 [2024-11-06 13:54:02.998334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.998343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.709 [2024-11-06 13:54:02.998580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.998588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.709 [2024-11-06 13:54:02.998910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.998920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 13:54:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.709 [2024-11-06 13:54:02.999311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.999319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:02.999490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.999499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-11-06 13:54:02.999685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:02.999693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.000001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.000010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.000314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.000322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.000504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.000513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.000694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.000702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-11-06 13:54:03.001010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.001018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.001199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.001208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.001363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.001371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.001680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.001688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.001998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.002005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-11-06 13:54:03.002353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.002361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.002665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.002672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.003013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.003023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.003204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.003549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.003558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-11-06 13:54:03.003860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.003868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.004171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.004179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.004493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.004500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.004828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.004836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-11-06 13:54:03.005057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-11-06 13:54:03.005065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbb84000b90 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
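The storm of `connect() failed, errno = 111` records above is the classic connect-before-listen race: errno 111 is ECONNREFUSED, meaning nothing was yet listening on 10.0.0.2:4420 while the host initiator kept retrying, until the `nvmf_subsystem_add_listener` RPC completed (the "Target Listening" notice just below). A minimal sketch of that retry-until-listener pattern — a hypothetical helper for illustration, not part of the SPDK test scripts:

```python
import socket
import time

def wait_for_listener(addr: str, port: int, timeout: float = 5.0,
                      interval: float = 0.1) -> bool:
    """Retry a TCP connect until a listener accepts or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful handshake means the target is listening.
            socket.create_connection((addr, port), timeout=1.0).close()
            return True
        except ConnectionRefusedError:
            # errno 111 (ECONNREFUSED): no listener yet, back off and retry.
            time.sleep(interval)
        except OSError:
            # Other transient errors (e.g. timeout) also warrant a retry.
            time.sleep(interval)
    return False
```

In the log, the SPDK initiator performs the equivalent loop internally, which is why each failed attempt is followed by another `posix_sock_create` record a few hundred microseconds later.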
00:29:39.709 [2024-11-06 13:54:03.005151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.709 [2024-11-06 13:54:03.015740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.709 [2024-11-06 13:54:03.015811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.709 [2024-11-06 13:54:03.015825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.709 [2024-11-06 13:54:03.015831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.709 [2024-11-06 13:54:03.015836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.709 [2024-11-06 13:54:03.015851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.709 13:54:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 831482 00:29:39.709 [2024-11-06 13:54:03.025792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.709 [2024-11-06 13:54:03.025847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.709 [2024-11-06 13:54:03.025858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.709 [2024-11-06 13:54:03.025863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.709 [2024-11-06 13:54:03.025868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.710 [2024-11-06 13:54:03.025880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.710 qpair failed and we were unable to recover it. 
00:29:39.710 [2024-11-06 13:54:03.035776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.710 [2024-11-06 13:54:03.035828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.710 [2024-11-06 13:54:03.035839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.710 [2024-11-06 13:54:03.035844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.710 [2024-11-06 13:54:03.035849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.710 [2024-11-06 13:54:03.035860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.710 qpair failed and we were unable to recover it. 
00:29:39.971 [2024-11-06 13:54:03.045791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.971 [2024-11-06 13:54:03.045850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.971 [2024-11-06 13:54:03.045861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.971 [2024-11-06 13:54:03.045866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.971 [2024-11-06 13:54:03.045871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.971 [2024-11-06 13:54:03.045883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.971 qpair failed and we were unable to recover it. 
00:29:39.971 [2024-11-06 13:54:03.055751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.055807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.055817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.055823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.055828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.055839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.065756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.065852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.065865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.065872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.065877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.065888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.075772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.075822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.075833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.075838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.075843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.075854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.085802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.085863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.085872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.085878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.085882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.085893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.095827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.095880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.095890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.095895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.095900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.095910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.105844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.105893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.105903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.105911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.105916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.105927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.115882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.115935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.115945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.115950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.115955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.115966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.125903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.125956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.125966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.125972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.125976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.125987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.135983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.136050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.136059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.136065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.136069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.136080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.145966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.146020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.146029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.146034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.146039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.146052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.155889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.155933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.155943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.155948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.155952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.155963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.165988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.166041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.166050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.166056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.166060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.166070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.176065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.972 [2024-11-06 13:54:03.176128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.972 [2024-11-06 13:54:03.176138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.972 [2024-11-06 13:54:03.176143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.972 [2024-11-06 13:54:03.176148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.972 [2024-11-06 13:54:03.176158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.972 qpair failed and we were unable to recover it. 
00:29:39.972 [2024-11-06 13:54:03.186041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.186089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.186099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.186104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.186109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.186120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.196105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.196155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.196165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.196170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.196175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.196185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.206120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.206174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.206183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.206189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.206194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.206204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.216156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.216254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.216264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.216270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.216275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.216285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.226168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.226217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.226227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.226232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.226237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.226248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.236291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.236345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.236355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.236363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.236367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.236378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.246269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.246324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.246334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.246339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.246344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.246354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.256202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.256254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.256263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.256269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.256273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.256284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.266335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.266382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.266391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.266397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.266402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.266412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.276280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.276332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.276342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.276347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.276351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.276364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.286267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.286320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.286331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.286336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.286341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.286352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.296355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.296409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.296419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.296424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.296429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.296439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.306400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.973 [2024-11-06 13:54:03.306448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.973 [2024-11-06 13:54:03.306457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.973 [2024-11-06 13:54:03.306463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.973 [2024-11-06 13:54:03.306467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.973 [2024-11-06 13:54:03.306478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.973 qpair failed and we were unable to recover it. 
00:29:39.973 [2024-11-06 13:54:03.316428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.974 [2024-11-06 13:54:03.316479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.974 [2024-11-06 13:54:03.316488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.974 [2024-11-06 13:54:03.316493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.974 [2024-11-06 13:54:03.316498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.974 [2024-11-06 13:54:03.316508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.974 qpair failed and we were unable to recover it. 
00:29:39.974 [2024-11-06 13:54:03.326464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.974 [2024-11-06 13:54:03.326539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.974 [2024-11-06 13:54:03.326548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.974 [2024-11-06 13:54:03.326554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.974 [2024-11-06 13:54:03.326559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.974 [2024-11-06 13:54:03.326569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.974 qpair failed and we were unable to recover it. 
00:29:39.974 [2024-11-06 13:54:03.336507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.974 [2024-11-06 13:54:03.336560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.974 [2024-11-06 13:54:03.336570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.974 [2024-11-06 13:54:03.336575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.974 [2024-11-06 13:54:03.336580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:39.974 [2024-11-06 13:54:03.336591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.974 qpair failed and we were unable to recover it. 
00:29:40.235 [2024-11-06 13:54:03.346528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.235 [2024-11-06 13:54:03.346579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.235 [2024-11-06 13:54:03.346589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.235 [2024-11-06 13:54:03.346594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.235 [2024-11-06 13:54:03.346599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.235 [2024-11-06 13:54:03.346609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.235 qpair failed and we were unable to recover it. 
00:29:40.235 [2024-11-06 13:54:03.356410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.235 [2024-11-06 13:54:03.356459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.235 [2024-11-06 13:54:03.356469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.235 [2024-11-06 13:54:03.356474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.236 [2024-11-06 13:54:03.356479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.236 [2024-11-06 13:54:03.356489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.236 qpair failed and we were unable to recover it. 
00:29:40.236 [2024-11-06 13:54:03.366572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.236 [2024-11-06 13:54:03.366623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.236 [2024-11-06 13:54:03.366636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.236 [2024-11-06 13:54:03.366641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.236 [2024-11-06 13:54:03.366646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.236 [2024-11-06 13:54:03.366656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.236 qpair failed and we were unable to recover it. 
00:29:40.236 [2024-11-06 13:54:03.376594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.236 [2024-11-06 13:54:03.376681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.236 [2024-11-06 13:54:03.376692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.236 [2024-11-06 13:54:03.376697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.236 [2024-11-06 13:54:03.376702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.236 [2024-11-06 13:54:03.376713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.236 qpair failed and we were unable to recover it. 
00:29:40.236 [2024-11-06 13:54:03.386636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.236 [2024-11-06 13:54:03.386732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.236 [2024-11-06 13:54:03.386743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.236 [2024-11-06 13:54:03.386752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.236 [2024-11-06 13:54:03.386756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.236 [2024-11-06 13:54:03.386767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.236 qpair failed and we were unable to recover it. 
00:29:40.236 [2024-11-06 13:54:03.396660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.236 [2024-11-06 13:54:03.396717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.236 [2024-11-06 13:54:03.396726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.236 [2024-11-06 13:54:03.396732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.236 [2024-11-06 13:54:03.396736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.236 [2024-11-06 13:54:03.396751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.236 qpair failed and we were unable to recover it. 
00:29:40.236 [2024-11-06 13:54:03.406688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.406763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.406773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.406778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.406785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.406796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.416741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.416795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.416805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.416810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.416814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.416825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.426707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.426756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.426766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.426771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.426776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.426786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.436787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.436839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.436848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.436853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.436858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.436868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.446700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.446756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.446766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.446771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.446776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.446786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.456842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.456937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.456947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.456953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.456958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.456969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.466724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.466781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.466790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.466796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.466800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.466811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.236 [2024-11-06 13:54:03.476869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.236 [2024-11-06 13:54:03.476918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.236 [2024-11-06 13:54:03.476928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.236 [2024-11-06 13:54:03.476933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.236 [2024-11-06 13:54:03.476938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.236 [2024-11-06 13:54:03.476948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.236 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.486908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.486962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.486972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.486977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.486982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.486992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.496947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.496998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.497010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.497016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.497020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.497031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.506997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.507076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.507085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.507091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.507095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.507106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.516987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.517040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.517050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.517055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.517060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.517070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.527029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.527080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.527090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.527095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.527100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.527111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.537068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.537171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.537181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.537186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.537194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.537205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.546965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.547033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.547042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.547047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.547052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.547063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.557106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.557157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.557167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.557172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.557176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.557187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.567134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.567185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.567194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.567199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.567204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.567214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.577180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.577253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.577263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.577269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.577273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.577284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.587078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.587125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.587135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.587140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.587144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.587155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.597218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.597268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.597278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.237 [2024-11-06 13:54:03.597283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.237 [2024-11-06 13:54:03.597288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.237 [2024-11-06 13:54:03.597298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.237 qpair failed and we were unable to recover it.
00:29:40.237 [2024-11-06 13:54:03.607263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.237 [2024-11-06 13:54:03.607311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.237 [2024-11-06 13:54:03.607321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.238 [2024-11-06 13:54:03.607326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.238 [2024-11-06 13:54:03.607331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.238 [2024-11-06 13:54:03.607341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.238 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.617298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.617355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.617364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.617369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.617374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.617385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.627308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.627360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.627370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.627375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.627380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.627391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.637210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.637259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.637270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.637276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.637280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.637291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.647373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.647427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.647437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.647442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.647447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.647457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.657389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.657437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.657446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.657451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.657456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.657467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.667432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.667481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.667490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.667498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.667503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.667514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.500 qpair failed and we were unable to recover it.
00:29:40.500 [2024-11-06 13:54:03.677402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.500 [2024-11-06 13:54:03.677457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.500 [2024-11-06 13:54:03.677477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.500 [2024-11-06 13:54:03.677482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.500 [2024-11-06 13:54:03.677487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.500 [2024-11-06 13:54:03.677502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.687405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.687468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.687478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.687484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.687489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.687499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.697518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.697567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.697577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.697582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.697586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.697597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.707541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.707588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.707598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.707603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.707608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.707621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.717548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.717595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.717605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.717610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.717615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.717625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.727560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.727617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.727627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.727632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.727637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.727647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.737619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.737670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.737681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.737686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.737691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.737702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.747635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.501 [2024-11-06 13:54:03.747683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.501 [2024-11-06 13:54:03.747694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.501 [2024-11-06 13:54:03.747699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.501 [2024-11-06 13:54:03.747704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:40.501 [2024-11-06 13:54:03.747715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.501 qpair failed and we were unable to recover it.
00:29:40.501 [2024-11-06 13:54:03.757665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.757723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.757732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.757738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.757743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.501 [2024-11-06 13:54:03.757757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.501 qpair failed and we were unable to recover it. 
00:29:40.501 [2024-11-06 13:54:03.767712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.767802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.767812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.767818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.767822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.501 [2024-11-06 13:54:03.767834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.501 qpair failed and we were unable to recover it. 
00:29:40.501 [2024-11-06 13:54:03.777723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.777776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.777786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.777791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.777796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.501 [2024-11-06 13:54:03.777807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.501 qpair failed and we were unable to recover it. 
00:29:40.501 [2024-11-06 13:54:03.787744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.787798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.787808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.787813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.787817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.501 [2024-11-06 13:54:03.787828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.501 qpair failed and we were unable to recover it. 
00:29:40.501 [2024-11-06 13:54:03.797741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.797835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.797844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.797854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.797858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.501 [2024-11-06 13:54:03.797869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.501 qpair failed and we were unable to recover it. 
00:29:40.501 [2024-11-06 13:54:03.807852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.501 [2024-11-06 13:54:03.807906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.501 [2024-11-06 13:54:03.807915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.501 [2024-11-06 13:54:03.807921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.501 [2024-11-06 13:54:03.807925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.807936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.817778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.817829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.817839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.817844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.817849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.817859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.827845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.827893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.827902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.827908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.827913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.827923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.837857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.837906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.837915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.837921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.837925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.837938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.847775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.847825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.847835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.847840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.847845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.847855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.857901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.857952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.857961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.857967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.857971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.857982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.502 [2024-11-06 13:54:03.867980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.502 [2024-11-06 13:54:03.868030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.502 [2024-11-06 13:54:03.868039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.502 [2024-11-06 13:54:03.868045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.502 [2024-11-06 13:54:03.868050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.502 [2024-11-06 13:54:03.868060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.502 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.877987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.878037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.878047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.878052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.878057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.878068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.888015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.888068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.888078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.888083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.888088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.888098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.898057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.898111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.898121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.898126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.898131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.898141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.908069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.908114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.908124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.908130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.908135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.908145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.918098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.918151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.918161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.918166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.918171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.918182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.928045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.928137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.928151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.928158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.928163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.928176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.938185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.938236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.938246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.938252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.938257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.938268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.948204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.948277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.948286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.948292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.948296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.948307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.958215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.958263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.958273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.958278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.958283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.958293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.968250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.968305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.968317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.779 [2024-11-06 13:54:03.968323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.779 [2024-11-06 13:54:03.968331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.779 [2024-11-06 13:54:03.968344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.779 qpair failed and we were unable to recover it. 
00:29:40.779 [2024-11-06 13:54:03.978323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.779 [2024-11-06 13:54:03.978419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.779 [2024-11-06 13:54:03.978429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:03.978435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:03.978439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:03.978450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:03.988308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:03.988357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:03.988366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:03.988372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:03.988377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:03.988387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:03.998295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:03.998340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:03.998350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:03.998355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:03.998360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:03.998371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.008350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.008406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.008416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.008421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.008426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.008436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.018413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.018464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.018474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.018479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.018484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.018495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.028426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.028475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.028485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.028490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.028495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.028505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.038341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.038437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.038448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.038453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.038458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.038469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.048492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.048543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.048553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.048558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.048563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.048573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.058525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.058579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.058595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.058600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.058605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.058617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.068531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.068614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.068625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.068630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.068635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.068646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.078569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.078622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.078632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.078637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.078642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.078653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.088620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.088707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.088717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.088722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.088728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.088739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.098639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.780 [2024-11-06 13:54:04.098734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.780 [2024-11-06 13:54:04.098745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.780 [2024-11-06 13:54:04.098754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.780 [2024-11-06 13:54:04.098762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.780 [2024-11-06 13:54:04.098773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.780 qpair failed and we were unable to recover it. 
00:29:40.780 [2024-11-06 13:54:04.108644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.781 [2024-11-06 13:54:04.108695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.781 [2024-11-06 13:54:04.108704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.781 [2024-11-06 13:54:04.108709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.781 [2024-11-06 13:54:04.108714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.781 [2024-11-06 13:54:04.108725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.781 qpair failed and we were unable to recover it. 
00:29:40.781 [2024-11-06 13:54:04.118679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.781 [2024-11-06 13:54:04.118731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.781 [2024-11-06 13:54:04.118741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.781 [2024-11-06 13:54:04.118749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.781 [2024-11-06 13:54:04.118754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.781 [2024-11-06 13:54:04.118765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.781 qpair failed and we were unable to recover it. 
00:29:40.781 [2024-11-06 13:54:04.128696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.781 [2024-11-06 13:54:04.128787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.781 [2024-11-06 13:54:04.128797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.781 [2024-11-06 13:54:04.128802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.781 [2024-11-06 13:54:04.128807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.781 [2024-11-06 13:54:04.128818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.781 qpair failed and we were unable to recover it. 
00:29:40.781 [2024-11-06 13:54:04.138750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.781 [2024-11-06 13:54:04.138804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.781 [2024-11-06 13:54:04.138814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.781 [2024-11-06 13:54:04.138819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.781 [2024-11-06 13:54:04.138823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.781 [2024-11-06 13:54:04.138834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.781 qpair failed and we were unable to recover it. 
00:29:40.781 [2024-11-06 13:54:04.148750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.781 [2024-11-06 13:54:04.148797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.781 [2024-11-06 13:54:04.148807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.781 [2024-11-06 13:54:04.148812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.781 [2024-11-06 13:54:04.148817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:40.781 [2024-11-06 13:54:04.148827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.781 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.158794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.158878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.158888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.158893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.158898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.158908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.168820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.168905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.168915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.168920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.168925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.168935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.178845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.178904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.178913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.178919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.178923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.178934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.188852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.188905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.188916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.188921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.188925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.188936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.198888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.198939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.198949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.198954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.198959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.198969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.208918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.208990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.209000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.209005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.209009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.209020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.218935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.218988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.218997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.219003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.219008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.219018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.228902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.228961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.228971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.228979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.228983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.228994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.238989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.239039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.239050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.239055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.239060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.239070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.249013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.249061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.249070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.249076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.249080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.249090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.259043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.259099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.259108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.259114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.259118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.044 [2024-11-06 13:54:04.259128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.044 qpair failed and we were unable to recover it. 
00:29:41.044 [2024-11-06 13:54:04.269116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.044 [2024-11-06 13:54:04.269200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.044 [2024-11-06 13:54:04.269210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.044 [2024-11-06 13:54:04.269215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.044 [2024-11-06 13:54:04.269220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.269234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.279122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.279169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.279179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.279184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.279189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.279200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.289135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.289222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.289232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.289238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.289243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.289254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.299196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.299285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.299294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.299300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.299305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.299315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.309188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.309256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.309266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.309271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.309275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.309286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.319111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.319168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.319179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.319185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.319189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.319200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.329239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.329290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.329300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.329305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.329310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.329320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
00:29:41.045 [2024-11-06 13:54:04.339304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.045 [2024-11-06 13:54:04.339355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.045 [2024-11-06 13:54:04.339364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.045 [2024-11-06 13:54:04.339370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.045 [2024-11-06 13:54:04.339374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.045 [2024-11-06 13:54:04.339385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.045 qpair failed and we were unable to recover it. 
[... the same seven-line CONNECT failure sequence (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / CQ transport error -6 on qpair id 2) repeated at ~10 ms intervals from 13:54:04.349 through 13:54:04.670; 33 identical attempts omitted ...]
00:29:41.310 [2024-11-06 13:54:04.680241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.310 [2024-11-06 13:54:04.680287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.310 [2024-11-06 13:54:04.680297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.310 [2024-11-06 13:54:04.680302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.310 [2024-11-06 13:54:04.680307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.310 [2024-11-06 13:54:04.680317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.310 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.690134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.690226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.690235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.690241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.690246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.690257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.700289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.700339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.700349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.700354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.700359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.700369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.710304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.710358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.710368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.710373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.710378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.710388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.720333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.720425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.720435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.720441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.720446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.720456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.730369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.730423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.730433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.730439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.730443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.730454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.740355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.740412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.740423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.740428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.740433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.740443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.750452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.750516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.750526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.750532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.750536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.750547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.760447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.760539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.760548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.760555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.760560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.760570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.770503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.770581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.770591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.770596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.770600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.770611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.780383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.780434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.780444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.780449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.780453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.780464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.790500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.790549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.573 [2024-11-06 13:54:04.790559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.573 [2024-11-06 13:54:04.790567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.573 [2024-11-06 13:54:04.790572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.573 [2024-11-06 13:54:04.790582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-11-06 13:54:04.800601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.573 [2024-11-06 13:54:04.800670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.800680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.800685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.800690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.800700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.810591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.810639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.810649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.810654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.810660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.810671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.820639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.820686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.820696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.820701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.820706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.820716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.830686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.830731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.830741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.830750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.830755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.830769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.840689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.840741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.840755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.840760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.840765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.840776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.850698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.850750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.850762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.850767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.850772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.850783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.860695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.860744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.860758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.860763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.860768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.860779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.870764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.870810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.870820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.870826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.870831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.870841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.880796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.880887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.880897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.880902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.880907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.880918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.890826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.890879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.890889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.890894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.890898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.890909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.900868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.900918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.900928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.900933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.900938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.900948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.910751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.910812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.910822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.910827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.910832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.910843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.920952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.921007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.921019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.574 [2024-11-06 13:54:04.921024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.574 [2024-11-06 13:54:04.921029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.574 [2024-11-06 13:54:04.921040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-11-06 13:54:04.930940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.574 [2024-11-06 13:54:04.930992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.574 [2024-11-06 13:54:04.931002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.575 [2024-11-06 13:54:04.931007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.575 [2024-11-06 13:54:04.931012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.575 [2024-11-06 13:54:04.931023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.575 qpair failed and we were unable to recover it. 
00:29:41.575 [2024-11-06 13:54:04.940962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.575 [2024-11-06 13:54:04.941021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.575 [2024-11-06 13:54:04.941030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.575 [2024-11-06 13:54:04.941036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.575 [2024-11-06 13:54:04.941040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.575 [2024-11-06 13:54:04.941051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.575 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 13:54:04.950987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-11-06 13:54:04.951033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-11-06 13:54:04.951043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-11-06 13:54:04.951048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-11-06 13:54:04.951053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.838 [2024-11-06 13:54:04.951063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 13:54:04.961009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-11-06 13:54:04.961056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-11-06 13:54:04.961065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-11-06 13:54:04.961071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-11-06 13:54:04.961075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.838 [2024-11-06 13:54:04.961092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 13:54:04.971030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-11-06 13:54:04.971079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-11-06 13:54:04.971089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-11-06 13:54:04.971094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-11-06 13:54:04.971098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:41.838 [2024-11-06 13:54:04.971109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 13:54:04.981030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.838 [2024-11-06 13:54:04.981079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.838 [2024-11-06 13:54:04.981089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.838 [2024-11-06 13:54:04.981094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.838 [2024-11-06 13:54:04.981098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.838 [2024-11-06 13:54:04.981109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 13:54:04.991096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.838 [2024-11-06 13:54:04.991164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.838 [2024-11-06 13:54:04.991174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.838 [2024-11-06 13:54:04.991179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.838 [2024-11-06 13:54:04.991184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.838 [2024-11-06 13:54:04.991195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 13:54:05.000991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.838 [2024-11-06 13:54:05.001038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.838 [2024-11-06 13:54:05.001048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.838 [2024-11-06 13:54:05.001053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.838 [2024-11-06 13:54:05.001058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.838 [2024-11-06 13:54:05.001068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 13:54:05.011155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.838 [2024-11-06 13:54:05.011240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.838 [2024-11-06 13:54:05.011250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.838 [2024-11-06 13:54:05.011255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.011260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.011270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.021179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.021274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.021284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.021290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.021294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.021305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.031166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.031214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.031224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.031230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.031234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.031245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.041228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.041273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.041283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.041288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.041293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.041304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.051218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.051270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.051283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.051289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.051293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.051304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.061268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.061318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.061328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.061333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.061338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.061349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.071302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.071350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.071360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.071366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.071370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.071381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.081324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.081376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.081385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.081391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.081395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.081406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.091380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.091427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.091437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.091442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.091450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.091460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.101367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.101418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.101428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.101433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.101438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.101448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.111307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.111353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.111363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.111368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.111373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.111383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 13:54:05.121429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.839 [2024-11-06 13:54:05.121517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.839 [2024-11-06 13:54:05.121527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.839 [2024-11-06 13:54:05.121534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.839 [2024-11-06 13:54:05.121538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.839 [2024-11-06 13:54:05.121549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.131464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.131522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.131541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.131547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.131552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.131567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.141486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.141541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.141560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.141566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.141571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.141586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.151511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.151561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.151580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.151586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.151591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.151606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.161520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.161572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.161583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.161589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.161594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.161605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.171580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.171632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.171643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.171648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.171653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.171664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.181628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.181679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.181693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.181699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.181703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.181714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.191603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.191654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.191664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.191669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.191673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.191684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 13:54:05.201639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.840 [2024-11-06 13:54:05.201700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.840 [2024-11-06 13:54:05.201710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.840 [2024-11-06 13:54:05.201715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.840 [2024-11-06 13:54:05.201719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:41.840 [2024-11-06 13:54:05.201730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:41.840 qpair failed and we were unable to recover it.
00:29:42.104 [2024-11-06 13:54:05.211701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.104 [2024-11-06 13:54:05.211758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.104 [2024-11-06 13:54:05.211769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.104 [2024-11-06 13:54:05.211774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.104 [2024-11-06 13:54:05.211779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.104 [2024-11-06 13:54:05.211790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.104 qpair failed and we were unable to recover it.
00:29:42.104 [2024-11-06 13:54:05.221708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.104 [2024-11-06 13:54:05.221760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.104 [2024-11-06 13:54:05.221770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.104 [2024-11-06 13:54:05.221779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.104 [2024-11-06 13:54:05.221784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.104 [2024-11-06 13:54:05.221795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.104 qpair failed and we were unable to recover it.
00:29:42.104 [2024-11-06 13:54:05.231713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.104 [2024-11-06 13:54:05.231762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.104 [2024-11-06 13:54:05.231772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.104 [2024-11-06 13:54:05.231778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.104 [2024-11-06 13:54:05.231783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.104 [2024-11-06 13:54:05.231793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.104 qpair failed and we were unable to recover it.
00:29:42.104 [2024-11-06 13:54:05.241866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.104 [2024-11-06 13:54:05.241925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.105 [2024-11-06 13:54:05.241935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.105 [2024-11-06 13:54:05.241940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.105 [2024-11-06 13:54:05.241945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.105 [2024-11-06 13:54:05.241956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.105 qpair failed and we were unable to recover it.
00:29:42.105 [2024-11-06 13:54:05.251860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.105 [2024-11-06 13:54:05.251915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.105 [2024-11-06 13:54:05.251925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.105 [2024-11-06 13:54:05.251930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.105 [2024-11-06 13:54:05.251935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.105 [2024-11-06 13:54:05.251945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.105 qpair failed and we were unable to recover it.
00:29:42.105 [2024-11-06 13:54:05.261762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.105 [2024-11-06 13:54:05.261847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.105 [2024-11-06 13:54:05.261856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.105 [2024-11-06 13:54:05.261862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.105 [2024-11-06 13:54:05.261866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.105 [2024-11-06 13:54:05.261877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.105 qpair failed and we were unable to recover it.
00:29:42.105 [2024-11-06 13:54:05.271892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.105 [2024-11-06 13:54:05.271944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.105 [2024-11-06 13:54:05.271954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.105 [2024-11-06 13:54:05.271959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.105 [2024-11-06 13:54:05.271964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.105 [2024-11-06 13:54:05.271975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.105 qpair failed and we were unable to recover it.
00:29:42.105 [2024-11-06 13:54:05.281887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.105 [2024-11-06 13:54:05.281938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.105 [2024-11-06 13:54:05.281949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.105 [2024-11-06 13:54:05.281954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.105 [2024-11-06 13:54:05.281959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:42.105 [2024-11-06 13:54:05.281970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:42.105 qpair failed and we were unable to recover it.
00:29:42.105 [2024-11-06 13:54:05.291917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.291966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.291976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.291981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.291986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.291996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.302064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.302147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.302157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.302163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.302167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.302179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.311975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.312032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.312042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.312047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.312052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.312062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.321989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.322035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.322045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.322050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.322055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.322065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.331956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.332010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.332020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.332025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.332030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.332041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.342082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.342163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.342173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.342179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.342183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.342194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.352084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.352129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.352138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.352147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.352151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.352162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.362123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-11-06 13:54:05.362175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-11-06 13:54:05.362184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-11-06 13:54:05.362190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-11-06 13:54:05.362194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.105 [2024-11-06 13:54:05.362205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-11-06 13:54:05.372124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.372177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.372187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.372192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.372197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.372207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.382193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.382242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.382252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.382258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.382262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.382273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.392185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.392231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.392240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.392246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.392250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.392263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.402232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.402280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.402290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.402295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.402300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.402310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.412267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.412317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.412327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.412332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.412336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.412347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.422284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.422329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.422339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.422344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.422349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.422359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.432308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.432352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.432362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.432367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.432372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.432382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.442305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.442351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.442361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.442366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.442370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.442381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.452329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.452382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.452392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.452397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.452402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.452412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.462403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.462456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.462466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.462471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.462476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.462486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-11-06 13:54:05.472387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-11-06 13:54:05.472441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-11-06 13:54:05.472451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-11-06 13:54:05.472456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-11-06 13:54:05.472461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.106 [2024-11-06 13:54:05.472471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.369 [2024-11-06 13:54:05.482444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.369 [2024-11-06 13:54:05.482492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.369 [2024-11-06 13:54:05.482514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.369 [2024-11-06 13:54:05.482521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.369 [2024-11-06 13:54:05.482526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.369 [2024-11-06 13:54:05.482541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.369 qpair failed and we were unable to recover it. 
00:29:42.369 [2024-11-06 13:54:05.492451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.369 [2024-11-06 13:54:05.492507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.369 [2024-11-06 13:54:05.492525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.369 [2024-11-06 13:54:05.492531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.369 [2024-11-06 13:54:05.492536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.369 [2024-11-06 13:54:05.492551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.369 qpair failed and we were unable to recover it. 
00:29:42.369 [2024-11-06 13:54:05.502393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.369 [2024-11-06 13:54:05.502448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.369 [2024-11-06 13:54:05.502460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.369 [2024-11-06 13:54:05.502465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.369 [2024-11-06 13:54:05.502470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.369 [2024-11-06 13:54:05.502481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.369 qpair failed and we were unable to recover it. 
00:29:42.369 [2024-11-06 13:54:05.512412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.369 [2024-11-06 13:54:05.512460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.369 [2024-11-06 13:54:05.512470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.369 [2024-11-06 13:54:05.512476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.369 [2024-11-06 13:54:05.512480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.369 [2024-11-06 13:54:05.512491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.369 qpair failed and we were unable to recover it. 
00:29:42.369 [2024-11-06 13:54:05.522577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.369 [2024-11-06 13:54:05.522628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.369 [2024-11-06 13:54:05.522647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.369 [2024-11-06 13:54:05.522653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.369 [2024-11-06 13:54:05.522662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.369 [2024-11-06 13:54:05.522676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.532591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.532645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.532656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.532662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.532667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.532678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.542620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.542676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.542686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.542691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.542696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.542707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.552628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.552671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.552680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.552686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.552690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.552701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.562576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.562623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.562633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.562638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.562643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.562653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.572706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.572759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.572769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.572774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.572779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.572790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.582739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.582794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.582803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.582809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.582814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.582824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.592775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.592836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.592846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.592851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.592856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.592866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.602787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.602841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.602851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.602856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.602860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.602871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.612818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.612869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.612881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.612886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.612891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.612902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.622724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.622778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.622788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.622793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.622797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.622808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.632870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.632921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.632931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.632937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.632941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.632952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.642901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.642951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.642961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.642966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.370 [2024-11-06 13:54:05.642971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.370 [2024-11-06 13:54:05.642981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.370 qpair failed and we were unable to recover it. 
00:29:42.370 [2024-11-06 13:54:05.652924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.370 [2024-11-06 13:54:05.652972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.370 [2024-11-06 13:54:05.652981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.370 [2024-11-06 13:54:05.652987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.652994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.653005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.662947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.663000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.663010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.663016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.663020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.663031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.672999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.673050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.673060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.673066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.673071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.673081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.682992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.683036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.683046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.683051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.683055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.683066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.693036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.693082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.693091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.693096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.693101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.693112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.703067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.703155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.703165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.703170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.703176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.703186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.713067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.713119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.713128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.713133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.713138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.713148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.723004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.723050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.723062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.723068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.723072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.723083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.371 [2024-11-06 13:54:05.733136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.371 [2024-11-06 13:54:05.733188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.371 [2024-11-06 13:54:05.733198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.371 [2024-11-06 13:54:05.733204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.371 [2024-11-06 13:54:05.733208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.371 [2024-11-06 13:54:05.733219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.371 qpair failed and we were unable to recover it. 
00:29:42.634 [2024-11-06 13:54:05.743154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.634 [2024-11-06 13:54:05.743241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.634 [2024-11-06 13:54:05.743253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.634 [2024-11-06 13:54:05.743259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.634 [2024-11-06 13:54:05.743264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.634 [2024-11-06 13:54:05.743275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.634 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.753143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.753187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.753196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.753201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.753206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.753217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.763175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.763222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.763232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.763237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.763242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.763253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.773228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.773301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.773311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.773316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.773320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.773331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.783269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.783319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.783328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.783336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.783341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.783351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.793295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.793342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.793352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.793357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.793362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.793372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.803331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.803377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.803387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.803392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.803397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.803407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.813268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.813359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.813368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.813374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.813379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.813390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.823413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.823467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.823476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.823481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.823486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.823496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.833419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.833468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.833478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.833483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.833488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.833499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.843462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.843518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.843536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.843542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.843547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.843562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.853474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.853572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.853592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.853598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.853603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.853618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.863514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.863566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.635 [2024-11-06 13:54:05.863578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.635 [2024-11-06 13:54:05.863583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.635 [2024-11-06 13:54:05.863588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.635 [2024-11-06 13:54:05.863600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.635 qpair failed and we were unable to recover it. 
00:29:42.635 [2024-11-06 13:54:05.873538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.635 [2024-11-06 13:54:05.873593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.636 [2024-11-06 13:54:05.873604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.636 [2024-11-06 13:54:05.873609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.636 [2024-11-06 13:54:05.873614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.636 [2024-11-06 13:54:05.873625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.636 qpair failed and we were unable to recover it. 
00:29:42.636 [2024-11-06 13:54:05.883555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.636 [2024-11-06 13:54:05.883602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.636 [2024-11-06 13:54:05.883613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.636 [2024-11-06 13:54:05.883619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.636 [2024-11-06 13:54:05.883624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.636 [2024-11-06 13:54:05.883635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.636 qpair failed and we were unable to recover it. 
00:29:42.636 [2024-11-06 13:54:05.893583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.636 [2024-11-06 13:54:05.893637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.636 [2024-11-06 13:54:05.893647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.636 [2024-11-06 13:54:05.893653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.636 [2024-11-06 13:54:05.893658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.636 [2024-11-06 13:54:05.893669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.636 qpair failed and we were unable to recover it. 
00:29:42.901 [2024-11-06 13:54:06.244531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.901 [2024-11-06 13:54:06.244581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.901 [2024-11-06 13:54:06.244591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.901 [2024-11-06 13:54:06.244596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.901 [2024-11-06 13:54:06.244601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.901 [2024-11-06 13:54:06.244611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.901 qpair failed and we were unable to recover it. 
00:29:42.901 [2024-11-06 13:54:06.254611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.901 [2024-11-06 13:54:06.254670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.901 [2024-11-06 13:54:06.254680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.901 [2024-11-06 13:54:06.254686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.901 [2024-11-06 13:54:06.254690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.901 [2024-11-06 13:54:06.254701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.901 qpair failed and we were unable to recover it. 
00:29:42.901 [2024-11-06 13:54:06.264636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.901 [2024-11-06 13:54:06.264711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.901 [2024-11-06 13:54:06.264721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.901 [2024-11-06 13:54:06.264727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.901 [2024-11-06 13:54:06.264731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:42.901 [2024-11-06 13:54:06.264742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:42.901 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.274601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.274651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.274662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.274667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.274672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.274683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.284633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.284676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.284686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.284691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.284696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.284707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.294704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.294759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.294769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.294775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.294779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.294790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.304727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.304786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.304799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.304804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.304809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.304820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.314752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.314803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.314813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.314819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.314823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.314834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.324710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.324757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.324767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.324773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.324777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.324788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.334818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.334868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.334878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.334884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.334888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.334899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.344854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.344910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.344920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.344927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.344932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.344943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.354872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.354925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.354934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.354940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.354944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.354955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.364840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.364883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.364893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.364898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.364903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.364913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.374930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.375031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.375042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.375047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.375053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.164 [2024-11-06 13:54:06.375063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.164 qpair failed and we were unable to recover it. 
00:29:43.164 [2024-11-06 13:54:06.384973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.164 [2024-11-06 13:54:06.385029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.164 [2024-11-06 13:54:06.385039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.164 [2024-11-06 13:54:06.385045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.164 [2024-11-06 13:54:06.385050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.385060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.394985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.395064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.395074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.395079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.395083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.395094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.404961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.405011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.405020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.405025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.405030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.405041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.415002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.415056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.415066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.415071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.415076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.415086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.425078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.425132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.425142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.425147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.425152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.425163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.435093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.435145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.435155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.435160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.435164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.435175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.445077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.445131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.445140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.445146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.445150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.445161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.455145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.455197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.455207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.455212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.455217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.455227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.465181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.465228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.465237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.465242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.465247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.465257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.475200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.475247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.475257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.475265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.475270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.475280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.485178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.485228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.485237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.485242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.485247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.485258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.495194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.495273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.495283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.495288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.495293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.495303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.505211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.165 [2024-11-06 13:54:06.505263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.165 [2024-11-06 13:54:06.505273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.165 [2024-11-06 13:54:06.505278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.165 [2024-11-06 13:54:06.505283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.165 [2024-11-06 13:54:06.505293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.165 qpair failed and we were unable to recover it. 
00:29:43.165 [2024-11-06 13:54:06.515205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.166 [2024-11-06 13:54:06.515304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.166 [2024-11-06 13:54:06.515316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.166 [2024-11-06 13:54:06.515322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.166 [2024-11-06 13:54:06.515327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.166 [2024-11-06 13:54:06.515340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.166 qpair failed and we were unable to recover it.
00:29:43.166 [2024-11-06 13:54:06.525286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.166 [2024-11-06 13:54:06.525330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.166 [2024-11-06 13:54:06.525340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.166 [2024-11-06 13:54:06.525345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.166 [2024-11-06 13:54:06.525350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.166 [2024-11-06 13:54:06.525361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.166 qpair failed and we were unable to recover it.
00:29:43.166 [2024-11-06 13:54:06.535325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.166 [2024-11-06 13:54:06.535376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.166 [2024-11-06 13:54:06.535386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.166 [2024-11-06 13:54:06.535391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.166 [2024-11-06 13:54:06.535396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.166 [2024-11-06 13:54:06.535406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.166 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.545394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.545447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.545458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.545463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.545468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.545479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.555432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.555480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.555491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.555496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.555500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.555511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.565366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.565412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.565423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.565429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.565433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.565444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.575476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.575530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.575541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.575546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.575551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.575561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.585502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.585555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.585573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.585579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.585585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.585599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.595518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.595567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.595579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.595585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.595589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.595601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.605439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.605515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.605529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.605535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.605540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.605552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.615576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.615628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.615640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.615645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.428 [2024-11-06 13:54:06.615650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.428 [2024-11-06 13:54:06.615661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.428 qpair failed and we were unable to recover it.
00:29:43.428 [2024-11-06 13:54:06.625593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.428 [2024-11-06 13:54:06.625685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.428 [2024-11-06 13:54:06.625696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.428 [2024-11-06 13:54:06.625702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.625707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.625717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.635623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.635669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.635679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.635685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.635690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.635701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.645642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.645724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.645734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.645739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.645751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.645763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.655699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.655752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.655762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.655768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.655772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.655783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.665707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.665779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.665789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.665794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.665799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.665809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.675707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.675798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.675808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.675814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.675818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.675829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.685724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.685799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.685809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.685815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.685819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.685830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.695794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.695843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.695853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.695858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.695862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.695873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.705837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.705892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.705902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.705907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.705912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.705922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.715859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.715927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.715937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.715942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.715947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.715957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.725831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.725892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.725902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.725907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.725912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.725922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.735895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.735944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.735957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.735962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.735966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.735977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.429 [2024-11-06 13:54:06.745958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.429 [2024-11-06 13:54:06.746003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.429 [2024-11-06 13:54:06.746014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.429 [2024-11-06 13:54:06.746019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.429 [2024-11-06 13:54:06.746023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.429 [2024-11-06 13:54:06.746033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.429 qpair failed and we were unable to recover it.
00:29:43.430 [2024-11-06 13:54:06.755994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.430 [2024-11-06 13:54:06.756055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.430 [2024-11-06 13:54:06.756066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.430 [2024-11-06 13:54:06.756071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.430 [2024-11-06 13:54:06.756075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.430 [2024-11-06 13:54:06.756086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.430 qpair failed and we were unable to recover it.
00:29:43.430 [2024-11-06 13:54:06.765946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.430 [2024-11-06 13:54:06.765994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.430 [2024-11-06 13:54:06.766004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.430 [2024-11-06 13:54:06.766010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.430 [2024-11-06 13:54:06.766014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.430 [2024-11-06 13:54:06.766025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.430 qpair failed and we were unable to recover it.
00:29:43.430 [2024-11-06 13:54:06.775991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.430 [2024-11-06 13:54:06.776043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.430 [2024-11-06 13:54:06.776053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.430 [2024-11-06 13:54:06.776059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.430 [2024-11-06 13:54:06.776066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.430 [2024-11-06 13:54:06.776076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.430 qpair failed and we were unable to recover it.
00:29:43.430 [2024-11-06 13:54:06.786050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.430 [2024-11-06 13:54:06.786105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.430 [2024-11-06 13:54:06.786115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.430 [2024-11-06 13:54:06.786120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.430 [2024-11-06 13:54:06.786125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.430 [2024-11-06 13:54:06.786135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.430 qpair failed and we were unable to recover it.
00:29:43.430 [2024-11-06 13:54:06.796025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.430 [2024-11-06 13:54:06.796109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.430 [2024-11-06 13:54:06.796118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.430 [2024-11-06 13:54:06.796124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.430 [2024-11-06 13:54:06.796129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.430 [2024-11-06 13:54:06.796139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.430 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.806035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.806076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.806086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.806091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.806096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.806106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.816123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.816177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.816186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.816192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.816196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.816207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.826038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.826118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.826128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.826133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.826138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.826148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.836170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.836223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.836232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.836238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.836242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.836253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.846160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.846206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.846216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.846221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.846226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.846236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.856230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.693 [2024-11-06 13:54:06.856284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.693 [2024-11-06 13:54:06.856294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.693 [2024-11-06 13:54:06.856299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.693 [2024-11-06 13:54:06.856303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.693 [2024-11-06 13:54:06.856314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.693 qpair failed and we were unable to recover it.
00:29:43.693 [2024-11-06 13:54:06.866270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.693 [2024-11-06 13:54:06.866339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.693 [2024-11-06 13:54:06.866352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.693 [2024-11-06 13:54:06.866358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.693 [2024-11-06 13:54:06.866362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.693 [2024-11-06 13:54:06.866373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.693 qpair failed and we were unable to recover it. 
00:29:43.693 [2024-11-06 13:54:06.876272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.693 [2024-11-06 13:54:06.876328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.693 [2024-11-06 13:54:06.876339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.693 [2024-11-06 13:54:06.876344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.693 [2024-11-06 13:54:06.876349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.693 [2024-11-06 13:54:06.876359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.693 qpair failed and we were unable to recover it. 
00:29:43.693 [2024-11-06 13:54:06.886278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.693 [2024-11-06 13:54:06.886332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.693 [2024-11-06 13:54:06.886342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.693 [2024-11-06 13:54:06.886347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.693 [2024-11-06 13:54:06.886352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.693 [2024-11-06 13:54:06.886362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.693 qpair failed and we were unable to recover it. 
00:29:43.693 [2024-11-06 13:54:06.896360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.896456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.896466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.896472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.896477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.896487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.906410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.906473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.906483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.906491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.906495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.906506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.916400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.916495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.916505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.916510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.916515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.916525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.926392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.926430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.926440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.926445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.926450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.926460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.936450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.936499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.936510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.936515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.936521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.936534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.946485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.946538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.946548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.946553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.946557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.946571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.956535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.956580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.956590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.956596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.956600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.956610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.966498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.966540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.966550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.966555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.966560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.966570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.976571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.976635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.976645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.976650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.976654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.976665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.986467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.986519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.986529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.986534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.986539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.986549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:06.996606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:06.996656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:06.996667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:06.996672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:06.996676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:06.996687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:07.006592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:07.006635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:07.006645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:07.006650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:07.006655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:07.006665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:07.016672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:07.016722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.694 [2024-11-06 13:54:07.016732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.694 [2024-11-06 13:54:07.016737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.694 [2024-11-06 13:54:07.016741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.694 [2024-11-06 13:54:07.016756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.694 qpair failed and we were unable to recover it. 
00:29:43.694 [2024-11-06 13:54:07.026723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.694 [2024-11-06 13:54:07.026774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.695 [2024-11-06 13:54:07.026785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.695 [2024-11-06 13:54:07.026790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.695 [2024-11-06 13:54:07.026794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.695 [2024-11-06 13:54:07.026805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.695 qpair failed and we were unable to recover it. 
00:29:43.695 [2024-11-06 13:54:07.036702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.695 [2024-11-06 13:54:07.036784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.695 [2024-11-06 13:54:07.036793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.695 [2024-11-06 13:54:07.036801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.695 [2024-11-06 13:54:07.036807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.695 [2024-11-06 13:54:07.036818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.695 qpair failed and we were unable to recover it. 
00:29:43.695 [2024-11-06 13:54:07.046716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.695 [2024-11-06 13:54:07.046772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.695 [2024-11-06 13:54:07.046782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.695 [2024-11-06 13:54:07.046788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.695 [2024-11-06 13:54:07.046792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.695 [2024-11-06 13:54:07.046803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.695 qpair failed and we were unable to recover it. 
00:29:43.695 [2024-11-06 13:54:07.056814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.695 [2024-11-06 13:54:07.056869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.695 [2024-11-06 13:54:07.056879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.695 [2024-11-06 13:54:07.056885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.695 [2024-11-06 13:54:07.056891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.695 [2024-11-06 13:54:07.056901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.695 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.066858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.066908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.066918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.066923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.066928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.066939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.076808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.076861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.076871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.076877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.076881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.076895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.086777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.086821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.086831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.086836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.086840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.086851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.096913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.096962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.096972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.096977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.096982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.096992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.106925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.106978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.106988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.106993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.106999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.107009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.116842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.116892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.116901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.116907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.116911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.116922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.126918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.955 [2024-11-06 13:54:07.126962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.955 [2024-11-06 13:54:07.126972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.955 [2024-11-06 13:54:07.126978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.955 [2024-11-06 13:54:07.126982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:43.955 [2024-11-06 13:54:07.126993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.955 qpair failed and we were unable to recover it. 
00:29:43.955 [2024-11-06 13:54:07.136949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.136992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.137002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.137007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.137012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.137022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.147017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.147069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.147078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.147084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.147088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.147099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.157024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.157066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.157075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.157080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.157085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.157096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.167022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.167060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.167072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.167077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.167082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.167092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.177066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.177107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.177117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.177122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.177127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.177137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.187094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.187140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.187150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.187155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.187160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.187170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.197113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.197161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.197171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.197176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.197180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.197191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.207131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.207174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.207183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.207188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.207196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.207207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.217150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.217197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.217207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.217212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.217216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.217227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.227230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.227274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.227284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.227289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.227293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.227303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.237132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.237180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.237190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.237195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.237200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.237210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.247256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.247336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.247346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.247351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.247356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.247366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.257142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.257185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.257194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.257199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.257204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.257214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.267224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.267277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.267288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.267293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.267298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.267309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.277382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.277467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.277478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.277483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.277489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.277500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.287306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.287348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.287358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.287363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.287368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.287379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.297247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.297289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.297301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.956 [2024-11-06 13:54:07.297307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.956 [2024-11-06 13:54:07.297311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.956 [2024-11-06 13:54:07.297322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.956 qpair failed and we were unable to recover it.
00:29:43.956 [2024-11-06 13:54:07.307428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.956 [2024-11-06 13:54:07.307478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.956 [2024-11-06 13:54:07.307488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.957 [2024-11-06 13:54:07.307493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.957 [2024-11-06 13:54:07.307498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.957 [2024-11-06 13:54:07.307509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.957 qpair failed and we were unable to recover it.
00:29:43.957 [2024-11-06 13:54:07.317450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.957 [2024-11-06 13:54:07.317497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.957 [2024-11-06 13:54:07.317507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.957 [2024-11-06 13:54:07.317512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.957 [2024-11-06 13:54:07.317517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.957 [2024-11-06 13:54:07.317527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.957 qpair failed and we were unable to recover it.
00:29:43.957 [2024-11-06 13:54:07.327315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.957 [2024-11-06 13:54:07.327355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.957 [2024-11-06 13:54:07.327364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.957 [2024-11-06 13:54:07.327370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.957 [2024-11-06 13:54:07.327375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:43.957 [2024-11-06 13:54:07.327385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.957 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.337477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.337522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.337531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.337536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.337544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.337555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.347552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.347602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.347620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.347627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.347632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.347646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.357571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.357617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.357628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.357633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.357638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.357650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.367524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.367568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.367578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.367583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.367588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.367599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.377582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.377628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.377639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.377644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.377649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.377659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.387640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.387684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.387695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.387700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.387704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.218 [2024-11-06 13:54:07.387715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.218 qpair failed and we were unable to recover it.
00:29:44.218 [2024-11-06 13:54:07.397650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.218 [2024-11-06 13:54:07.397692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.218 [2024-11-06 13:54:07.397702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.218 [2024-11-06 13:54:07.397707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.218 [2024-11-06 13:54:07.397712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.397723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.407671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.407712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.407721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.407727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.407731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.407742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.417680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.417725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.417736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.417741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.417749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.417761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.427773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.427818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.427831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.427836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.427841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.427852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.437787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.437834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.437844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.437849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.437854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.437864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.447771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.447809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.447818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.447824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.447828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.447839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.457815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.457859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.457869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.457874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.457879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.457890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.467848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.467895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.467905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.467913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.467918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.467929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.477936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.219 [2024-11-06 13:54:07.477982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.219 [2024-11-06 13:54:07.477992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.219 [2024-11-06 13:54:07.477997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.219 [2024-11-06 13:54:07.478002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90
00:29:44.219 [2024-11-06 13:54:07.478013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.219 qpair failed and we were unable to recover it.
00:29:44.219 [2024-11-06 13:54:07.487890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.219 [2024-11-06 13:54:07.487927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.219 [2024-11-06 13:54:07.487937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.219 [2024-11-06 13:54:07.487942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.219 [2024-11-06 13:54:07.487947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.219 [2024-11-06 13:54:07.487958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.219 qpair failed and we were unable to recover it. 
00:29:44.219 [2024-11-06 13:54:07.497871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.219 [2024-11-06 13:54:07.497913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.219 [2024-11-06 13:54:07.497924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.219 [2024-11-06 13:54:07.497929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.219 [2024-11-06 13:54:07.497933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.219 [2024-11-06 13:54:07.497944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.219 qpair failed and we were unable to recover it. 
00:29:44.219 [2024-11-06 13:54:07.507988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.219 [2024-11-06 13:54:07.508036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.219 [2024-11-06 13:54:07.508046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.219 [2024-11-06 13:54:07.508051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.219 [2024-11-06 13:54:07.508056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.219 [2024-11-06 13:54:07.508069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.219 qpair failed and we were unable to recover it. 
00:29:44.219 [2024-11-06 13:54:07.518047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.219 [2024-11-06 13:54:07.518094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.219 [2024-11-06 13:54:07.518105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.219 [2024-11-06 13:54:07.518110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.219 [2024-11-06 13:54:07.518115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.219 [2024-11-06 13:54:07.518125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.219 qpair failed and we were unable to recover it. 
00:29:44.219 [2024-11-06 13:54:07.527980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.219 [2024-11-06 13:54:07.528018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.219 [2024-11-06 13:54:07.528028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.219 [2024-11-06 13:54:07.528033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.219 [2024-11-06 13:54:07.528038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.528048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.538036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.538082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.538092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.538098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.538102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.538113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.548059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.548106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.548116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.548122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.548126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.548137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.558112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.558159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.558169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.558174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.558178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.558189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.568087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.568129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.568139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.568144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.568149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.568159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.578119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.578164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.578174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.578179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.578184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.578194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.220 [2024-11-06 13:54:07.588200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.220 [2024-11-06 13:54:07.588247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.220 [2024-11-06 13:54:07.588257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.220 [2024-11-06 13:54:07.588262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.220 [2024-11-06 13:54:07.588266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.220 [2024-11-06 13:54:07.588276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.220 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.598220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.598274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.598284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.598292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.598297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.598308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.608211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.608267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.608277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.608282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.608288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.608298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.618242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.618284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.618294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.618299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.618304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.618314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.628310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.628388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.628398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.628404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.628409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.628420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.638304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.638345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.638354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.638359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.638364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.638378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.648311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.648351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.648361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.648367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.648371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.648382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.658310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.658352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.658362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.658368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.658372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.658383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.668419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.668497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.668507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.668512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.668517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.668528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.678422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.678468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.678487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.678493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.678499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.678514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.688405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.688449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.688467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.688474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.688479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.688493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.698460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.698507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.698526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.698532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.698537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.698551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.483 [2024-11-06 13:54:07.708520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.483 [2024-11-06 13:54:07.708572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.483 [2024-11-06 13:54:07.708590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.483 [2024-11-06 13:54:07.708596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.483 [2024-11-06 13:54:07.708601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.483 [2024-11-06 13:54:07.708615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.483 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.718547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.718591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.718602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.718607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.718612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.718624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.728537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.728578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.728592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.728598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.728602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.728613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.738433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.738477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.738489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.738495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.738500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.738511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.748594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.748638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.748649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.748654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.748659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.748670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.758642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.758685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.758694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.758699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.758704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.758715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.768626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.768680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.768689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.768695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.768702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.768713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.778689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.778733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.778744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.778755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.778760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.778771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.788732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.788782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.788792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.788797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.788802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.788813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.798765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.798809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.798819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.798824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.798829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.798840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.808758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.808802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.808813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.808818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.808823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.808834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.818783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.818824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.818834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.818839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.818844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.818855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.828901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.828956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.828966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.828971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.828975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.484 [2024-11-06 13:54:07.828986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.484 qpair failed and we were unable to recover it. 
00:29:44.484 [2024-11-06 13:54:07.838733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.484 [2024-11-06 13:54:07.838782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.484 [2024-11-06 13:54:07.838793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.484 [2024-11-06 13:54:07.838798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.484 [2024-11-06 13:54:07.838803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.485 [2024-11-06 13:54:07.838814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.485 qpair failed and we were unable to recover it. 
00:29:44.485 [2024-11-06 13:54:07.848852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.485 [2024-11-06 13:54:07.848897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.485 [2024-11-06 13:54:07.848907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.485 [2024-11-06 13:54:07.848913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.485 [2024-11-06 13:54:07.848917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.485 [2024-11-06 13:54:07.848928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.485 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.858891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.858934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.858947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.858952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.858957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.858967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.869015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.869063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.869073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.869078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.869083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.869093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.878955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.879007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.879017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.879022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.879028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.879038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.888971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.889008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.889019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.889024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.889029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.889039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.899013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.899055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.899065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.899070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.899078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.899088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.909046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.909093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.747 [2024-11-06 13:54:07.909103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.747 [2024-11-06 13:54:07.909108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.747 [2024-11-06 13:54:07.909113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.747 [2024-11-06 13:54:07.909123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.747 qpair failed and we were unable to recover it. 
00:29:44.747 [2024-11-06 13:54:07.919089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.747 [2024-11-06 13:54:07.919133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.919143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.919148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.919152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.919163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.929059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.929142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.929152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.929157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.929162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.929172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.938963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.939004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.939014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.939019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.939024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.939034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.949205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.949277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.949287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.949292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.949297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.949307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.959156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.959246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.959256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.959261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.959266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.959276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.969144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.969183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.969192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.969198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.969202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.969213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.979185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.979228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.979239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.979244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.979249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.979259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.989253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.989326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.989338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.989343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.989348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.989359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:07.999242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:07.999282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:07.999291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:07.999296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:07.999301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:07.999311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:08.009290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:08.009330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:08.009339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:08.009345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:08.009349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:08.009360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:08.019329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:08.019404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:08.019414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:08.019419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:08.019424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:08.019434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.748 qpair failed and we were unable to recover it. 
00:29:44.748 [2024-11-06 13:54:08.029386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.748 [2024-11-06 13:54:08.029435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.748 [2024-11-06 13:54:08.029444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.748 [2024-11-06 13:54:08.029453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.748 [2024-11-06 13:54:08.029458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.748 [2024-11-06 13:54:08.029468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.039416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.039463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.039473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.039478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.039483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.039493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.049408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.049451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.049461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.049466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.049471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.049482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.059403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.059444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.059454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.059459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.059465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.059475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.069508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.069566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.069584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.069591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.069596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.069615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.079390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.079443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.079454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.079460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.079464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.079476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.089487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.089529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.089540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.089545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.089550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.089561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.099523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.099568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.099579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.099584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.099589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.099600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.109603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.109653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.109663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.109668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.109673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.109684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:44.749 [2024-11-06 13:54:08.119501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.749 [2024-11-06 13:54:08.119563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.749 [2024-11-06 13:54:08.119573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.749 [2024-11-06 13:54:08.119578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.749 [2024-11-06 13:54:08.119583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:44.749 [2024-11-06 13:54:08.119593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.749 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.129604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.129680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.129690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.129696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.129700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.129711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.139604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.139649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.139659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.139664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.139669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.139680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.149704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.149759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.149769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.149774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.149779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.149789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.159714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.159760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.159770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.159779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.159784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.159794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.169718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.169770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.169780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.169786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.169790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.169801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.179753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.179796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.179806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.179812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.179816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.179827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.012 [2024-11-06 13:54:08.189847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.012 [2024-11-06 13:54:08.189927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.012 [2024-11-06 13:54:08.189937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.012 [2024-11-06 13:54:08.189942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.012 [2024-11-06 13:54:08.189946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.012 [2024-11-06 13:54:08.189957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.012 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.199836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.199880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.199890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.199895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.199900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.199914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.209826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.209866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.209876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.209881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.209885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.209896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.219842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.219892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.219902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.219907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.219912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.219922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.229931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.229975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.229985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.229990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.229995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.230006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.239908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.239958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.239968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.239974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.239978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.239988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.249939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.249976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.249986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.249991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.249996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.250006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.259963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.260054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.260064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.260069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.260074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.260085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.269921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.269999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.270009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.270014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.270019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.270031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.280053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.280102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.280111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.280117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.280121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.280132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.290082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.290144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.290156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.290162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.290166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.290177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.299947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.299990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.299999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.300005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.300009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.300020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.310035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.310082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.310092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.310097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.310102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.310112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.013 [2024-11-06 13:54:08.320136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.013 [2024-11-06 13:54:08.320178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.013 [2024-11-06 13:54:08.320188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.013 [2024-11-06 13:54:08.320193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.013 [2024-11-06 13:54:08.320198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.013 [2024-11-06 13:54:08.320209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.013 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.330025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.330066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.330075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.330080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.330088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.014 [2024-11-06 13:54:08.330098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.340204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.340248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.340258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.340263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.340267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.014 [2024-11-06 13:54:08.340277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.350281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.350324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.350334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.350339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.350343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.014 [2024-11-06 13:54:08.350353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.360256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.360309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.360318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.360323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.360328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.014 [2024-11-06 13:54:08.360338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.370176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.370213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.370224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.370229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.370233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb84000b90 00:29:45.014 [2024-11-06 13:54:08.370244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.014 [2024-11-06 13:54:08.370628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46e00 is same with the state(6) to be set 00:29:45.014 [2024-11-06 13:54:08.380294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.014 [2024-11-06 13:54:08.380402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.014 [2024-11-06 13:54:08.380466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.014 [2024-11-06 13:54:08.380492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.014 [2024-11-06 13:54:08.380512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb8c000b90 00:29:45.014 [2024-11-06 13:54:08.380568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.014 qpair failed and we were unable to recover it. 
00:29:45.276 [2024-11-06 13:54:08.390387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.276 [2024-11-06 13:54:08.390489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.276 [2024-11-06 13:54:08.390538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.276 [2024-11-06 13:54:08.390556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.276 [2024-11-06 13:54:08.390571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb8c000b90 00:29:45.276 [2024-11-06 13:54:08.390612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.276 qpair failed and we were unable to recover it. 
00:29:45.276 [2024-11-06 13:54:08.400385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.276 [2024-11-06 13:54:08.400444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.276 [2024-11-06 13:54:08.400470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.276 [2024-11-06 13:54:08.400479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.276 [2024-11-06 13:54:08.400486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a510c0 00:29:45.276 [2024-11-06 13:54:08.400506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.276 qpair failed and we were unable to recover it. 
00:29:45.276 [2024-11-06 13:54:08.410371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.276 [2024-11-06 13:54:08.410427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.276 [2024-11-06 13:54:08.410452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.276 [2024-11-06 13:54:08.410461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.276 [2024-11-06 13:54:08.410468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a510c0 00:29:45.276 [2024-11-06 13:54:08.410488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.276 qpair failed and we were unable to recover it. 
00:29:45.276 [2024-11-06 13:54:08.420406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.276 [2024-11-06 13:54:08.420536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.276 [2024-11-06 13:54:08.420601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.276 [2024-11-06 13:54:08.420628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.276 [2024-11-06 13:54:08.420649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb80000b90 00:29:45.276 [2024-11-06 13:54:08.420705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.276 qpair failed and we were unable to recover it. 
00:29:45.276 [2024-11-06 13:54:08.430437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.276 [2024-11-06 13:54:08.430522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.276 [2024-11-06 13:54:08.430570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.276 [2024-11-06 13:54:08.430589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.276 [2024-11-06 13:54:08.430605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbb80000b90 00:29:45.276 [2024-11-06 13:54:08.430646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.276 qpair failed and we were unable to recover it. 00:29:45.276 [2024-11-06 13:54:08.431230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a46e00 (9): Bad file descriptor 00:29:45.276 Initializing NVMe Controllers 00:29:45.276 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:45.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:45.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:45.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:45.276 Initialization complete. Launching workers. 
00:29:45.276 Starting thread on core 1 00:29:45.276 Starting thread on core 2 00:29:45.276 Starting thread on core 3 00:29:45.276 Starting thread on core 0 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:45.276 00:29:45.276 real 0m11.483s 00:29:45.276 user 0m21.503s 00:29:45.276 sys 0m3.640s 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:45.276 ************************************ 00:29:45.276 END TEST nvmf_target_disconnect_tc2 00:29:45.276 ************************************ 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.276 rmmod nvme_tcp 00:29:45.276 rmmod nvme_fabrics 00:29:45.276 rmmod nvme_keyring 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 832270 ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 832270 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 832270 ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 832270 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832270 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 832270' 00:29:45.276 killing process with pid 832270 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 832270 00:29:45.276 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 832270 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.537 13:54:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.452 13:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.452 00:29:47.452 real 0m21.784s 00:29:47.452 user 0m49.565s 00:29:47.452 sys 0m9.784s 00:29:47.452 13:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.452 13:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:47.452 ************************************ 00:29:47.452 END TEST nvmf_target_disconnect 00:29:47.452 ************************************ 00:29:47.712 13:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:47.712 00:29:47.712 real 6m30.796s 00:29:47.712 user 11m26.674s 00:29:47.712 sys 2m12.326s 00:29:47.712 13:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.712 13:54:10 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.712 ************************************ 00:29:47.712 END TEST nvmf_host 00:29:47.712 ************************************ 00:29:47.712 13:54:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:47.712 13:54:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:47.712 13:54:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:47.712 13:54:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:47.712 13:54:10 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:47.712 13:54:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.712 ************************************ 00:29:47.712 START TEST nvmf_target_core_interrupt_mode 00:29:47.712 ************************************ 00:29:47.712 13:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:47.712 * Looking for test storage... 
00:29:47.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:47.712 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.712 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.712 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:47.974 13:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.974 --rc 
genhtml_branch_coverage=1 00:29:47.974 --rc genhtml_function_coverage=1 00:29:47.974 --rc genhtml_legend=1 00:29:47.974 --rc geninfo_all_blocks=1 00:29:47.974 --rc geninfo_unexecuted_blocks=1 00:29:47.974 00:29:47.974 ' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.974 --rc genhtml_branch_coverage=1 00:29:47.974 --rc genhtml_function_coverage=1 00:29:47.974 --rc genhtml_legend=1 00:29:47.974 --rc geninfo_all_blocks=1 00:29:47.974 --rc geninfo_unexecuted_blocks=1 00:29:47.974 00:29:47.974 ' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.974 --rc genhtml_branch_coverage=1 00:29:47.974 --rc genhtml_function_coverage=1 00:29:47.974 --rc genhtml_legend=1 00:29:47.974 --rc geninfo_all_blocks=1 00:29:47.974 --rc geninfo_unexecuted_blocks=1 00:29:47.974 00:29:47.974 ' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.974 --rc genhtml_branch_coverage=1 00:29:47.974 --rc genhtml_function_coverage=1 00:29:47.974 --rc genhtml_legend=1 00:29:47.974 --rc geninfo_all_blocks=1 00:29:47.974 --rc geninfo_unexecuted_blocks=1 00:29:47.974 00:29:47.974 ' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.974 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.975 
13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.975 13:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.975 
13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.975 ************************************ 00:29:47.975 START TEST nvmf_abort 00:29:47.975 ************************************ 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:47.975 * Looking for test storage... 
00:29:47.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.975 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:48.237 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:48.238 13:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.238 --rc genhtml_branch_coverage=1 00:29:48.238 --rc genhtml_function_coverage=1 00:29:48.238 --rc genhtml_legend=1 00:29:48.238 --rc geninfo_all_blocks=1 00:29:48.238 --rc geninfo_unexecuted_blocks=1 00:29:48.238 00:29:48.238 ' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.238 --rc genhtml_branch_coverage=1 00:29:48.238 --rc genhtml_function_coverage=1 00:29:48.238 --rc genhtml_legend=1 00:29:48.238 --rc geninfo_all_blocks=1 00:29:48.238 --rc geninfo_unexecuted_blocks=1 00:29:48.238 00:29:48.238 ' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.238 --rc genhtml_branch_coverage=1 00:29:48.238 --rc genhtml_function_coverage=1 00:29:48.238 --rc genhtml_legend=1 00:29:48.238 --rc geninfo_all_blocks=1 00:29:48.238 --rc geninfo_unexecuted_blocks=1 00:29:48.238 00:29:48.238 ' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.238 --rc genhtml_branch_coverage=1 00:29:48.238 --rc genhtml_function_coverage=1 00:29:48.238 --rc genhtml_legend=1 00:29:48.238 --rc geninfo_all_blocks=1 00:29:48.238 --rc geninfo_unexecuted_blocks=1 00:29:48.238 00:29:48.238 ' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.238 13:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:48.238 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.239 13:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.239 13:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.992 13:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:54.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:54.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.992 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.993 
13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:54.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:54.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.993 13:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:54.993 13:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:29:54.993 00:29:54.993 --- 10.0.0.2 ping statistics --- 00:29:54.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.993 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:29:54.993 00:29:54.993 --- 10.0.0.1 ping statistics --- 00:29:54.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.993 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=838155 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 838155 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 838155 ']' 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:54.993 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:54.993 [2024-11-06 13:54:18.157694] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.993 [2024-11-06 13:54:18.158866] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:29:54.993 [2024-11-06 13:54:18.158917] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.993 [2024-11-06 13:54:18.257875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:54.993 [2024-11-06 13:54:18.308557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.993 [2024-11-06 13:54:18.308611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.993 [2024-11-06 13:54:18.308620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.993 [2024-11-06 13:54:18.308627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.993 [2024-11-06 13:54:18.308633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.993 [2024-11-06 13:54:18.310418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.993 [2024-11-06 13:54:18.310585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.993 [2024-11-06 13:54:18.310586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.254 [2024-11-06 13:54:18.388501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:55.254 [2024-11-06 13:54:18.388569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:55.254 [2024-11-06 13:54:18.389274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:55.254 [2024-11-06 13:54:18.389523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 [2024-11-06 13:54:19.003475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:55.826 Malloc0 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 Delay0 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 [2024-11-06 13:54:19.111428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.826 13:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:56.086 [2024-11-06 13:54:19.276889] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:57.996 Initializing NVMe Controllers 00:29:57.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:57.996 controller IO queue size 128 less than required 00:29:57.996 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:57.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:57.996 Initialization complete. Launching workers. 
00:29:57.996 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29124 00:29:57.996 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29181, failed to submit 66 00:29:57.996 success 29124, unsuccessful 57, failed 0 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.996 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.996 rmmod nvme_tcp 00:29:57.996 rmmod nvme_fabrics 00:29:58.257 rmmod nvme_keyring 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.257 13:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 838155 ']' 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 838155 ']' 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 838155' 00:29:58.257 killing process with pid 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 838155 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.257 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.518 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.518 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.518 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.518 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.518 13:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.428 00:30:00.428 real 0m12.506s 00:30:00.428 user 0m10.629s 00:30:00.428 sys 0m6.307s 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.428 ************************************ 00:30:00.428 END TEST nvmf_abort 00:30:00.428 ************************************ 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.428 ************************************ 00:30:00.428 START TEST nvmf_ns_hotplug_stress 00:30:00.428 ************************************ 00:30:00.428 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:00.690 * Looking for test storage... 00:30:00.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.690 --rc genhtml_branch_coverage=1 00:30:00.690 --rc genhtml_function_coverage=1 00:30:00.690 --rc genhtml_legend=1 00:30:00.690 --rc geninfo_all_blocks=1 00:30:00.690 --rc geninfo_unexecuted_blocks=1 00:30:00.690 00:30:00.690 ' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.690 --rc genhtml_branch_coverage=1 00:30:00.690 --rc genhtml_function_coverage=1 00:30:00.690 --rc genhtml_legend=1 00:30:00.690 --rc geninfo_all_blocks=1 00:30:00.690 --rc geninfo_unexecuted_blocks=1 00:30:00.690 00:30:00.690 ' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.690 --rc genhtml_branch_coverage=1 00:30:00.690 --rc genhtml_function_coverage=1 00:30:00.690 --rc genhtml_legend=1 00:30:00.690 --rc geninfo_all_blocks=1 00:30:00.690 --rc geninfo_unexecuted_blocks=1 00:30:00.690 00:30:00.690 ' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.690 --rc genhtml_branch_coverage=1 00:30:00.690 --rc genhtml_function_coverage=1 00:30:00.690 --rc genhtml_legend=1 00:30:00.690 --rc geninfo_all_blocks=1 00:30:00.690 --rc geninfo_unexecuted_blocks=1 00:30:00.690 00:30:00.690 ' 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.690 13:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.690 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.691 13:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.691 13:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.691 13:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:08.829 13:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.829 
13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:08.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.829 13:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:08.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.829 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.830 13:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:08.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:08.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:30:08.830 00:30:08.830 --- 10.0.0.2 ping statistics --- 00:30:08.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.830 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:30:08.830 00:30:08.830 --- 10.0.0.1 ping statistics --- 00:30:08.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.830 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.830 13:54:31 
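The netns plumbing traced above (nvmf_tcp_init in nvmf/common.sh) boils down to the short sequence below. This is a dry-run sketch reconstructed from the trace, not the script itself: it prints each command instead of executing it, since the real steps need root and the cvl_0_0/cvl_0_1 interfaces present on this rig.

```shell
# Dry-run replay of the nvmf_tcp_init steps seen in the trace above.
# run_cmd=echo prints the commands; switching it to "sudo" (with the
# cvl_* interfaces actually present) would apply them for real.
run_cmd=echo

$run_cmd ip -4 addr flush cvl_0_0
$run_cmd ip -4 addr flush cvl_0_1
$run_cmd ip netns add cvl_0_0_ns_spdk                  # target-side namespace
$run_cmd ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target NIC into it
$run_cmd ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP
$run_cmd ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$run_cmd ip link set cvl_0_1 up
$run_cmd ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$run_cmd ip netns exec cvl_0_0_ns_spdk ip link set lo up
$run_cmd iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$run_cmd ping -c 1 10.0.0.2                            # initiator -> target
$run_cmd ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
```

The two ping checks at the end correspond to the successful 10.0.0.2 and 10.0.0.1 replies logged just below in the trace.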
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=842925 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 842925 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 842925 ']' 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:08.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:08.830 13:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:08.830 [2024-11-06 13:54:31.512811] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.830 [2024-11-06 13:54:31.513775] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:30:08.830 [2024-11-06 13:54:31.513814] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.830 [2024-11-06 13:54:31.608289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:08.830 [2024-11-06 13:54:31.645976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.831 [2024-11-06 13:54:31.646013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.831 [2024-11-06 13:54:31.646022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.831 [2024-11-06 13:54:31.646029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.831 [2024-11-06 13:54:31.646035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:08.831 [2024-11-06 13:54:31.647412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.831 [2024-11-06 13:54:31.647569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.831 [2024-11-06 13:54:31.647570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.831 [2024-11-06 13:54:31.707642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:08.831 [2024-11-06 13:54:31.707683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:08.831 [2024-11-06 13:54:31.708242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:08.831 [2024-11-06 13:54:31.708583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:09.091 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.352 [2024-11-06 13:54:32.480336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.352 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:09.352 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.612 [2024-11-06 13:54:32.841085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.612 13:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.874 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:09.874 Malloc0 00:30:10.134 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:10.134 Delay0 00:30:10.134 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.395 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:10.656 NULL1 00:30:10.656 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:10.656 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=843537 00:30:10.656 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:10.656 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.656 13:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:10.916 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.176 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:11.176 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
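Before the stress loop starts, the trace above runs a fixed RPC setup: create the TCP transport, the cnode1 subsystem and its listeners, then a Malloc0 bdev wrapped by Delay0 plus a resizable NULL1 bdev, both attached as namespaces. The sketch below is a dry-run summary taken from those trace lines; the commands are echoed rather than executed, since they need a live nvmf_tgt listening on /var/tmp/spdk.sock.

```shell
# Dry-run summary of the RPC setup from the trace (a running nvmf_tgt
# is required to execute these for real, so they are printed instead).
rpc="echo rpc.py"   # stand-in for scripts/rpc.py against a live target

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0               # backing malloc bdev
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512                    # resizable null bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this setup the trace launches spdk_nvme_perf against 10.0.0.2:4420 (PERF_PID=843537) and the hotplug loop begins.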
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:11.176 true 00:30:11.176 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:11.176 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.436 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.697 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:11.697 13:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:11.697 true 00:30:11.697 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:11.697 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.958 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.218 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:12.218 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:12.218 true 00:30:12.478 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:12.478 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.478 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.739 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:12.739 13:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:12.999 true 00:30:12.999 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:12.999 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.999 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.259 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:13.260 13:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:13.520 true 00:30:13.520 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:13.520 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.780 13:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.780 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:13.780 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:14.040 true 00:30:14.040 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:14.040 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.299 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.299 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:30:14.299 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:14.558 true 00:30:14.558 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:14.558 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.819 13:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.819 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:14.819 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:15.112 true 00:30:15.112 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:15.112 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.373 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.373 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:30:15.373 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:15.634 true 00:30:15.634 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:15.634 13:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.894 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.155 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:16.155 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:16.155 true 00:30:16.155 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:16.155 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.415 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.676 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:16.676 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:16.676 true 00:30:16.676 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:16.676 13:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.937 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.197 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:17.197 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:17.197 true 00:30:17.197 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:17.197 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.457 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.717 13:54:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:17.717 13:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:17.717 true 00:30:17.717 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:17.717 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.978 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.237 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:18.237 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:18.237 true 00:30:18.496 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:18.496 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.496 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:30:18.756 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:18.756 13:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:19.016 true 00:30:19.016 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:19.016 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.016 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.276 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:19.276 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:19.536 true 00:30:19.536 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:19.536 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.536 13:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.795 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:19.796 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:20.055 true 00:30:20.055 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:20.055 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.055 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.315 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:20.315 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:20.575 true 00:30:20.575 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:20.575 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.575 13:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.835 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:20.835 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:21.096 true 00:30:21.096 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:21.096 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.096 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.356 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:21.356 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:21.616 true 00:30:21.616 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:21.616 13:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.878 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.878 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:21.878 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:22.138 true 00:30:22.138 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:22.138 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.399 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.399 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:22.399 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:22.660 true 00:30:22.660 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:22.660 13:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.921 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.921 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:22.921 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:23.182 true 00:30:23.182 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:23.182 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.443 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.443 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:23.443 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:23.705 true 00:30:23.705 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:23.705 13:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.966 13:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.966 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:23.966 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:24.228 true 00:30:24.228 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:24.228 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.488 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.488 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:24.488 13:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:24.748 true 00:30:24.748 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:24.748 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:25.009 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.269 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:25.269 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:25.269 true 00:30:25.269 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:25.269 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.530 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.790 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:25.790 13:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:25.790 true 00:30:25.790 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:25.790 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:26.051 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.353 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:26.353 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:26.353 true 00:30:26.353 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:26.353 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.613 13:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.873 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:26.873 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:26.873 true 00:30:26.873 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:26.873 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.133 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.394 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:27.394 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:27.394 true 00:30:27.394 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:27.394 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.654 13:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.915 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:27.915 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:27.915 true 00:30:27.915 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:27.915 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.175 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.434 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:28.435 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:28.435 true 00:30:28.435 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:28.435 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.694 13:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.954 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:28.954 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:28.954 true 00:30:29.213 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:29.213 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.214 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.474 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:29.475 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:29.735 true 00:30:29.735 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:29.735 13:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.735 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.995 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:29.995 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:30.255 true 00:30:30.255 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:30.255 13:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.255 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.516 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:30.516 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:30.776 true 00:30:30.776 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:30.776 13:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.776 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.037 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:31.037 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:31.297 true 00:30:31.297 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 
00:30:31.297 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.297 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.557 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:31.557 13:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:31.818 true 00:30:31.818 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:31.818 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.078 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.078 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:32.078 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:32.338 true 00:30:32.338 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 843537 00:30:32.338 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.597 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.598 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:32.598 13:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:32.858 true 00:30:32.858 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:32.858 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.118 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.118 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:33.118 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:33.378 true 00:30:33.378 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:33.378 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.638 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.638 13:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:33.638 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:33.898 true 00:30:33.898 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:33.898 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.158 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.418 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:34.418 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:34.418 true 00:30:34.418 13:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:34.418 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.680 13:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.940 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:34.940 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:34.940 true 00:30:34.940 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:34.940 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.200 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.461 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:35.461 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:35.461 true 
00:30:35.461 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:35.461 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.722 13:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.982 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:35.983 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:35.983 true 00:30:35.983 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:35.983 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.243 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.503 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:36.503 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:30:36.503 true 00:30:36.503 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:36.503 13:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.765 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.027 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:37.027 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:37.027 true 00:30:37.287 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:37.287 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.287 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.571 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:37.571 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:30:37.571 true 00:30:37.571 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:37.571 13:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.831 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.091 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:38.091 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:38.350 true 00:30:38.350 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:38.350 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.350 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.610 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:38.610 13:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:38.870 true 00:30:38.870 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:38.871 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.871 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.131 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:39.131 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:30:39.390 true 00:30:39.390 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:39.390 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.390 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.650 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:30:39.650 13:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:30:39.909 true 00:30:39.909 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:39.909 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.909 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.168 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:30:40.168 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:30:40.428 true 00:30:40.428 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537 00:30:40.428 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.689 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.689 13:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:30:40.689 13:55:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:30:40.949 true
00:30:40.949 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537
00:30:40.949 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:40.949 Initializing NVMe Controllers
00:30:40.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:40.950 Controller IO queue size 128, less than required.
00:30:40.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:40.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:40.950 Initialization complete. Launching workers.
00:30:40.950 ========================================================
00:30:40.950 Latency(us)
00:30:40.950 Device Information : IOPS MiB/s Average min max
00:30:40.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29826.36 14.56 4291.47 1452.14 11185.17
00:30:40.950 ========================================================
00:30:40.950 Total : 29826.36 14.56 4291.47 1452.14 11185.17
00:30:41.210 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:41.210 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:30:41.210 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:30:41.470 true
00:30:41.470 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 843537
00:30:41.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (843537) - No such process
00:30:41.470 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 843537
00:30:41.470 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:41.730 13:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:41.730 13:55:05
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:41.730 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:41.730 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:41.730 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:41.730 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:41.991 null0 00:30:41.991 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:41.991 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:41.991 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:42.251 null1 00:30:42.251 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:42.251 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.251 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:42.251 null2 00:30:42.251 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:42.251 13:55:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.251 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:42.513 null3 00:30:42.513 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:42.513 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.513 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:42.774 null4 00:30:42.774 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:42.774 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.774 13:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:42.774 null5 00:30:42.774 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:42.774 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.774 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:43.036 null6 00:30:43.036 13:55:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.036 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.036 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:43.036 null7 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.298 13:55:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.298 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 849714 849715 849718 849719 849721 849723 849725 849728 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:43.299 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.561 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.562 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:43.822 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.822 13:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:43.822 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:43.822 13:55:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.083 13:55:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:44.083 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:44.342 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.342 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.343 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.603 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:44.864 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.864 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.864 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.864 13:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:44.864 13:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:44.864 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:45.125 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:45.125 13:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.385 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.646 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.646 13:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:45.647 13:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.647 13:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.907 13:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:45.907 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.167 13:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.167 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.168 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.433 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.434 13:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.434 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.699 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.699 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.699 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.699 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.700 13:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.700 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.700 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.700 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.959 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.959 rmmod nvme_tcp 00:30:47.218 rmmod nvme_fabrics 00:30:47.218 rmmod nvme_keyring 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:47.218 
13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 842925 ']' 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 842925 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 842925 ']' 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 842925 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 842925 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 842925' 00:30:47.218 killing process with pid 842925 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 842925 00:30:47.218 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 842925 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.478 13:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.385 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.385 00:30:49.385 real 0m48.891s 00:30:49.385 user 3m3.359s 00:30:49.385 sys 0m22.288s 00:30:49.385 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:49.385 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:49.385 ************************************ 00:30:49.385 END TEST nvmf_ns_hotplug_stress 00:30:49.385 
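The trace above repeats ns_hotplug_stress.sh lines 16-18: a counted loop that races `nvmf_subsystem_add_ns` and `nvmf_subsystem_remove_ns` RPCs against subsystem `nqn.2016-06.io.spdk:cnode1` while I/O runs. A minimal self-contained bash sketch of that loop follows; note the RPC call is replaced by a stub shell function here, since the real `scripts/rpc.py` requires a live SPDK target, and the nsid/bdev pairing is an assumption for illustration:

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py; the real test invokes the SPDK JSON-RPC client.
RPC() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
added=0 removed=0
for (( i = 0; i < 10; i++ )); do
    nsid=$(( (i % 8) + 1 ))                       # namespaces 1..8, as in the trace
    # Attach null bdev "null<nsid-1>" as namespace <nsid>, then detach it again.
    RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))" && (( ++added ))
    RPC nvmf_subsystem_remove_ns "$NQN" "$nsid" && (( ++removed ))
done
echo "added=$added removed=$removed"
```

In the real run the loop body is fired repeatedly from several shells at once, which is why add/remove operations for different nsids interleave in the log.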
************************************ 00:30:49.385 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:49.386 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:49.386 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:49.386 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.648 ************************************ 00:30:49.648 START TEST nvmf_delete_subsystem 00:30:49.648 ************************************ 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:49.648 * Looking for test storage... 
00:30:49.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.648 13:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.648 13:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:49.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.648 --rc genhtml_branch_coverage=1 00:30:49.648 --rc genhtml_function_coverage=1 00:30:49.648 --rc genhtml_legend=1 00:30:49.648 --rc geninfo_all_blocks=1 00:30:49.648 --rc geninfo_unexecuted_blocks=1 00:30:49.648 00:30:49.648 ' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:49.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.648 --rc genhtml_branch_coverage=1 00:30:49.648 --rc genhtml_function_coverage=1 00:30:49.648 --rc genhtml_legend=1 00:30:49.648 --rc geninfo_all_blocks=1 00:30:49.648 --rc geninfo_unexecuted_blocks=1 00:30:49.648 00:30:49.648 ' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:49.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.648 --rc genhtml_branch_coverage=1 00:30:49.648 --rc genhtml_function_coverage=1 00:30:49.648 --rc genhtml_legend=1 00:30:49.648 --rc geninfo_all_blocks=1 00:30:49.648 --rc geninfo_unexecuted_blocks=1 00:30:49.648 00:30:49.648 ' 00:30:49.648 13:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:49.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.648 --rc genhtml_branch_coverage=1 00:30:49.648 --rc genhtml_function_coverage=1 00:30:49.648 --rc genhtml_legend=1 00:30:49.648 --rc geninfo_all_blocks=1 00:30:49.648 --rc geninfo_unexecuted_blocks=1 00:30:49.648 00:30:49.648 ' 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.648 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.649 13:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.649 
13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.649 13:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.649 13:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:57.795 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.795 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:30:57.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.796 13:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:57.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:57.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.796 13:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:57.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:30:57.796 00:30:57.796 --- 10.0.0.2 ping statistics --- 00:30:57.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.796 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:30:57.796 00:30:57.796 --- 10.0.0.1 ping statistics --- 00:30:57.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.796 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
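The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) sets up the target-side network namespace and verifies connectivity both ways. A standalone sketch of those same steps follows; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are taken from this log and are specific to this rig (adjust per machine), and the whole sequence requires root:

```shell
#!/usr/bin/env bash
# Sketch of the netns plumbing traced in nvmf/common.sh above.
# ASSUMPTION: cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 come from this
# particular log; substitute your own NIC ports and addresses.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

The two pings mirror the `ping statistics` output captured in the log; once both succeed, the target app is launched inside the namespace via `ip netns exec "$NS" …`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the trace.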
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=854867 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 854867 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 854867 ']' 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:57.796 13:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:57.796 [2024-11-06 13:55:20.496315] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:57.796 [2024-11-06 13:55:20.497457] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:30:57.796 [2024-11-06 13:55:20.497508] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.796 [2024-11-06 13:55:20.580901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:57.796 [2024-11-06 13:55:20.622086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.797 [2024-11-06 13:55:20.622125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.797 [2024-11-06 13:55:20.622134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.797 [2024-11-06 13:55:20.622141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.797 [2024-11-06 13:55:20.622147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.797 [2024-11-06 13:55:20.623481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.797 [2024-11-06 13:55:20.623484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.797 [2024-11-06 13:55:20.679920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:57.797 [2024-11-06 13:55:20.680391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:57.797 [2024-11-06 13:55:20.680743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 [2024-11-06 13:55:21.348074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 [2024-11-06 13:55:21.376537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 NULL1 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 Delay0 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=855069 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:58.058 13:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:58.318 [2024-11-06 13:55:21.473187] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
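The `delete_subsystem.sh@15-28` steps traced above (create the TCP transport, the subsystem, its listener, a Null bdev wrapped in a Delay bdev, then run perf against it before deleting the subsystem mid-flight) condense to roughly this RPC sequence. The arguments are copied from the trace; the `rpc.py` wrapper path is an assumption based on a standard SPDK checkout, since the log's `rpc_cmd` helper hides it:

```shell
# Sketch of target/delete_subsystem.sh as seen in this trace.
# ASSUMPTION: $RPC points at scripts/rpc.py in a standard SPDK tree;
# all command arguments below are verbatim from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512     # null backing bdev: 1000 MiB, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive I/O at the (deliberately slow) namespace, then delete the
# subsystem while that I/O is still in flight -- the point of the test:
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The Delay bdev's 1-second (1000000 µs) latencies guarantee that queued I/O is still outstanding when the delete lands, which is why the trace that follows shows a burst of `completed with error (sct=0, sc=8)` records and `starting I/O failed: -6` from the perf initiator: those aborts are the expected outcome, not a test failure.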
00:31:00.233 13:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.233 13:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.233 13:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8) 00:31:00.233 starting I/O failed: -6 00:31:00.233 Read completed with error (sct=0, sc=8) 00:31:00.233 Write completed with error (sct=0, 
sc=8) 00:31:00.233 Write completed with error (sct=0, sc=8)
00:31:00.233 Read completed with error (sct=0, sc=8)
00:31:00.233 starting I/O failed: -6
00:31:00.234 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries condensed ...]
00:31:00.234 [2024-11-06 13:55:23.594591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408680 is same with the state(6) to be set
00:31:00.234 [2024-11-06 13:55:23.599027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f615c000c40 is same with the state(6) to be set
00:31:01.615 [2024-11-06 13:55:24.572932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14099a0 is same with the state(6) to be set
00:31:01.615 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries condensed ...]
00:31:01.615 [2024-11-06 13:55:24.598043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408860 is same with the state(6) to be set
00:31:01.615 [2024-11-06 13:55:24.598358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14084a0 is same with the state(6) to be set
00:31:01.615 [2024-11-06 13:55:24.600875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f615c00d020 is same with the state(6) to be set
00:31:01.615 [2024-11-06 13:55:24.600960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f615c00d7c0 is same with the state(6) to be set
00:31:01.615 Initializing NVMe Controllers
00:31:01.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:01.615 Controller IO queue size 128, less than required.
00:31:01.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:01.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:01.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:01.616 Initialization complete. Launching workers.
00:31:01.616 ========================================================
00:31:01.616 Latency(us)
00:31:01.616 Device Information : IOPS MiB/s Average min max
00:31:01.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.78 0.08 899556.81 233.42 1006841.36
00:31:01.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.29 0.08 907671.50 308.26 1010737.23
00:31:01.616 ========================================================
00:31:01.616 Total : 332.07 0.16 903571.58 233.42 1010737.23
00:31:01.616
00:31:01.616 [2024-11-06 13:55:24.601481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14099a0 (9): Bad file descriptor
00:31:01.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:01.616 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.616 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:01.616 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 855069 00:31:01.616 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 855069 00:31:01.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line
35: kill: (855069) - No such process 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 855069 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 855069 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 855069 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:01.876 13:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:01.876 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 [2024-11-06 13:55:25.136464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=855803 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:01.877 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:01.877 [2024-11-06 13:55:25.208317] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:02.446 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:02.446 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:02.446 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:03.016 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:03.016 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:03.016 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:03.587 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:03.587 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 
00:31:03.587 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:03.848 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:03.848 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:03.848 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:04.417 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:04.417 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:04.417 13:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:04.986 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:04.986 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:04.986 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:04.986 Initializing NVMe Controllers
00:31:04.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:04.986 Controller IO queue size 128, less than required.
00:31:04.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:04.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:04.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:04.986 Initialization complete. Launching workers.
00:31:04.986 ========================================================
00:31:04.986 Latency(us)
00:31:04.986 Device Information : IOPS MiB/s Average min max
00:31:04.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002666.17 1000164.36 1042134.09
00:31:04.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004054.25 1000399.90 1010353.05
00:31:04.986 ========================================================
00:31:04.986 Total : 256.00 0.12 1003360.21 1000164.36 1042134.09
00:31:04.986
00:31:05.557 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:05.557 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855803 00:31:05.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (855803) - No such process 00:31:05.557 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 855803 00:31:05.557 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.558 rmmod nvme_tcp 00:31:05.558 rmmod nvme_fabrics 00:31:05.558 rmmod nvme_keyring 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 854867 ']' 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 854867 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 854867 ']' 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 854867 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 854867 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 854867' 00:31:05.558 killing process with pid 854867 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 854867 00:31:05.558 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 854867 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.818 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.729 00:31:07.729 real 0m18.261s 00:31:07.729 user 0m26.284s 00:31:07.729 sys 0m7.469s 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:07.729 ************************************ 00:31:07.729 END TEST nvmf_delete_subsystem 00:31:07.729 ************************************ 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:07.729 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.013 ************************************ 00:31:08.013 START TEST nvmf_host_management 00:31:08.013 ************************************ 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:08.013 * Looking for test storage... 
00:31:08.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.013 13:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:08.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.013 --rc genhtml_branch_coverage=1 00:31:08.013 --rc genhtml_function_coverage=1 00:31:08.013 --rc genhtml_legend=1 00:31:08.013 --rc geninfo_all_blocks=1 00:31:08.013 --rc geninfo_unexecuted_blocks=1 00:31:08.013 00:31:08.013 ' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:08.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.013 --rc genhtml_branch_coverage=1 00:31:08.013 --rc genhtml_function_coverage=1 00:31:08.013 --rc genhtml_legend=1 00:31:08.013 --rc geninfo_all_blocks=1 00:31:08.013 --rc geninfo_unexecuted_blocks=1 00:31:08.013 00:31:08.013 ' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:08.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.013 --rc genhtml_branch_coverage=1 00:31:08.013 --rc genhtml_function_coverage=1 00:31:08.013 --rc genhtml_legend=1 00:31:08.013 --rc geninfo_all_blocks=1 00:31:08.013 --rc geninfo_unexecuted_blocks=1 00:31:08.013 00:31:08.013 ' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:08.013 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.013 --rc genhtml_branch_coverage=1 00:31:08.013 --rc genhtml_function_coverage=1 00:31:08.013 --rc genhtml_legend=1 00:31:08.013 --rc geninfo_all_blocks=1 00:31:08.013 --rc geninfo_unexecuted_blocks=1 00:31:08.013 00:31:08.013 ' 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.013 13:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.013 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.014 
13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.014 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.271 
13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.271 13:55:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:16.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.271 13:55:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:16.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.271 13:55:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:16.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:16.271 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:16.271 13:55:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.271 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
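The trace above (nvmf/common.sh@256-259) assigns the two discovered net devices to the two test roles. A minimal sketch of that split, assuming the device names printed in the log (cvl_0_0, cvl_0_1) and showing only the more-than-one-device branch the trace actually takes:

```shell
# Sketch of the interface split visible in the trace: PCI discovery left
# two net devices; with more than one available, the first becomes the
# target-side interface and the second the initiator side.
# Device names mirror the log output (cvl_0_0, cvl_0_1).
net_devs=(cvl_0_0 cvl_0_1)

if (( ${#net_devs[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${net_devs[0]}
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}
fi

echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```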
00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:31:16.272 00:31:16.272 --- 10.0.0.2 ping statistics --- 00:31:16.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.272 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:31:16.272 00:31:16.272 --- 10.0.0.1 ping statistics --- 00:31:16.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.272 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
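The nvmf_tcp_init sequence traced above (nvmf/common.sh@265-291) can be sketched as a standalone script. This is a hedged reconstruction from the log, not the library itself: interface names, the namespace name, the 10.0.0.0/24 addresses, and port 4420 are taken verbatim from the trace; it must run as root on a host that actually has both interfaces, so it is shown as an environment-setup fragment rather than something runnable here.

```shell
#!/usr/bin/env bash
# Sketch of the network setup the trace performs: move one port of a
# two-port NIC into a private namespace, address both ends on the same
# /24, open the NVMe/TCP listener port, and verify reachability in both
# directions. Names and addresses mirror the log; requires root.
set -euo pipefail

NS=cvl_0_0_ns_spdk        # namespace holding the target-side interface
TGT_IF=cvl_0_0            # target interface (moved into the namespace)
INI_IF=cvl_0_1            # initiator interface (stays in the root ns)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow the NVMe/TCP listener port (4420) in, as the trace's ipts helper does.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions, exactly as the trace does before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving the physical port into a namespace is what lets a single host act as both NVMe-oF target and initiator over real NIC hardware instead of loopback.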
00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=860576 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 860576 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 860576 ']' 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:16.272 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 [2024-11-06 13:55:38.617883] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.272 [2024-11-06 13:55:38.618991] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:31:16.272 [2024-11-06 13:55:38.619047] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.272 [2024-11-06 13:55:38.724393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.272 [2024-11-06 13:55:38.776639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.272 [2024-11-06 13:55:38.776696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.272 [2024-11-06 13:55:38.776704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.272 [2024-11-06 13:55:38.776711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.272 [2024-11-06 13:55:38.776717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:16.272 [2024-11-06 13:55:38.779076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.272 [2024-11-06 13:55:38.779243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.272 [2024-11-06 13:55:38.779411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.272 [2024-11-06 13:55:38.779411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:16.272 [2024-11-06 13:55:38.856124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.272 [2024-11-06 13:55:38.856828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.272 [2024-11-06 13:55:38.857381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.272 [2024-11-06 13:55:38.857784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:16.272 [2024-11-06 13:55:38.857837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
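After launching nvmf_tgt, the trace calls `waitforlisten 860576`, which blocks until the target is up. A minimal sketch of that wait, assuming the default RPC socket path `/var/tmp/spdk.sock` visible in the log; the function name `wait_for_rpc` and the retry budget are illustrative, not the helper's actual internals:

```shell
# Sketch of what a waitforlisten-style helper does: poll until the target
# process is still alive AND its UNIX-domain RPC socket exists, then
# return success; give up after a bounded number of retries.
wait_for_rpc() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$sock" ] && return 0               # RPC socket is listening
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Only once this returns does the script proceed to `rpc_cmd nvmf_create_transport -t tcp -o -u 8192`, since RPCs would fail against a target that has not finished binding its socket.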
00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 [2024-11-06 13:55:39.452263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 13:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:16.272 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.273 Malloc0 00:31:16.273 [2024-11-06 13:55:39.544524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=860798 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 860798 /var/tmp/bdevperf.sock 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 860798 ']' 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:16.273 { 00:31:16.273 "params": { 00:31:16.273 "name": "Nvme$subsystem", 00:31:16.273 "trtype": "$TEST_TRANSPORT", 00:31:16.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.273 "adrfam": "ipv4", 00:31:16.273 "trsvcid": "$NVMF_PORT", 00:31:16.273 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.273 "hdgst": ${hdgst:-false}, 00:31:16.273 "ddgst": ${ddgst:-false} 00:31:16.273 }, 00:31:16.273 "method": "bdev_nvme_attach_controller" 00:31:16.273 } 00:31:16.273 EOF 00:31:16.273 )") 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:16.273 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:16.273 "params": { 00:31:16.273 "name": "Nvme0", 00:31:16.273 "trtype": "tcp", 00:31:16.273 "traddr": "10.0.0.2", 00:31:16.273 "adrfam": "ipv4", 00:31:16.273 "trsvcid": "4420", 00:31:16.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.273 "hdgst": false, 00:31:16.273 "ddgst": false 00:31:16.273 }, 00:31:16.273 "method": "bdev_nvme_attach_controller" 00:31:16.273 }' 00:31:16.533 [2024-11-06 13:55:39.650240] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:31:16.533 [2024-11-06 13:55:39.650301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860798 ] 00:31:16.533 [2024-11-06 13:55:39.721897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.533 [2024-11-06 13:55:39.758301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.792 Running I/O for 10 seconds... 
00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:17.364 13:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=656 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 656 -ge 100 ']' 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.364 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.364 
[2024-11-06 13:55:40.507996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508114] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.364 [2024-11-06 13:55:40.508167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508207] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508288] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508369] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.508435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aee2a0 is same with the state(6) to be set 00:31:17.365 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.365 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:17.365 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.365 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.365 [2024-11-06 13:55:40.517052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.365 [2024-11-06 13:55:40.517086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.365 [2024-11-06 13:55:40.517104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.365 [2024-11-06 13:55:40.517124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.365 [2024-11-06 13:55:40.517140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7000 is same with the state(6) to be set 00:31:17.365 [2024-11-06 13:55:40.517207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:17.365 [2024-11-06 13:55:40.517410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.365 [2024-11-06 13:55:40.517419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517503] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 
13:55:40.517798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.517984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.517991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.518008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.518024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.518043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.518060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.366 [2024-11-06 13:55:40.518076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.366 [2024-11-06 13:55:40.518086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.367 [2024-11-06 13:55:40.518176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.367 [2024-11-06 13:55:40.518261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.367 [2024-11-06 13:55:40.518270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.367 [2024-11-06 13:55:40.518277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.367 [2024-11-06 13:55:40.518286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.367 [2024-11-06 13:55:40.518294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.367 [2024-11-06 13:55:40.519539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:17.367 task offset: 95488 on job bdev=Nvme0n1 fails
00:31:17.367
00:31:17.367 Latency(us)
00:31:17.367 [2024-11-06T12:55:40.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:17.367 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.367 Job: Nvme0n1 ended in about 0.49 seconds with error
00:31:17.367 Verification LBA range: start 0x0 length 0x400
00:31:17.367 Nvme0n1 : 0.49 1515.78 94.74 130.04 0.00 37851.48 1638.40 37792.43
00:31:17.367 [2024-11-06T12:55:40.743Z] ===================================================================================================================
00:31:17.367 [2024-11-06T12:55:40.743Z] Total : 1515.78 94.74 130.04 0.00 37851.48 1638.40 37792.43
00:31:17.367 [2024-11-06 13:55:40.521526] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:17.367 [2024-11-06 13:55:40.521548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f7000 (9): Bad file descriptor
00:31:17.367 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:17.367 13:55:40
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:17.367 [2024-11-06 13:55:40.568981] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 860798 00:31:18.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (860798) - No such process 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:18.307 { 00:31:18.307 "params": { 00:31:18.307 "name": "Nvme$subsystem", 00:31:18.307 "trtype": "$TEST_TRANSPORT", 00:31:18.307 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:18.307 "adrfam": "ipv4", 00:31:18.307 "trsvcid": "$NVMF_PORT", 00:31:18.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.307 "hdgst": ${hdgst:-false}, 00:31:18.307 "ddgst": ${ddgst:-false} 00:31:18.307 }, 00:31:18.307 "method": "bdev_nvme_attach_controller" 00:31:18.307 } 00:31:18.307 EOF 00:31:18.307 )") 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:18.307 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:18.307 "params": { 00:31:18.307 "name": "Nvme0", 00:31:18.307 "trtype": "tcp", 00:31:18.307 "traddr": "10.0.0.2", 00:31:18.307 "adrfam": "ipv4", 00:31:18.307 "trsvcid": "4420", 00:31:18.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.307 "hdgst": false, 00:31:18.307 "ddgst": false 00:31:18.307 }, 00:31:18.307 "method": "bdev_nvme_attach_controller" 00:31:18.307 }' 00:31:18.307 [2024-11-06 13:55:41.585173] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:31:18.307 [2024-11-06 13:55:41.585231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861201 ] 00:31:18.307 [2024-11-06 13:55:41.655047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.568 [2024-11-06 13:55:41.690361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.568 Running I/O for 1 seconds... 
00:31:19.508 1617.00 IOPS, 101.06 MiB/s
00:31:19.508 Latency(us)
00:31:19.508 [2024-11-06T12:55:42.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:19.508 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:19.508 Verification LBA range: start 0x0 length 0x400
00:31:19.508 Nvme0n1 : 1.01 1663.10 103.94 0.00 0.00 37763.39 2211.84 35607.89
00:31:19.508 [2024-11-06T12:55:42.884Z] ===================================================================================================================
00:31:19.508 [2024-11-06T12:55:42.884Z] Total : 1663.10 103.94 0.00 0.00 37763.39 2211.84 35607.89
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:19.768
13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.768 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.768 rmmod nvme_tcp 00:31:19.768 rmmod nvme_fabrics 00:31:19.768 rmmod nvme_keyring 00:31:19.768 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 860576 ']' 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 860576 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 860576 ']' 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 860576 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 860576 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:19.769 13:55:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 860576' 00:31:19.769 killing process with pid 860576 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 860576 00:31:19.769 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 860576 00:31:20.030 [2024-11-06 13:55:43.219242] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:20.030 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:31:22.576
00:31:22.576 real 0m14.233s
00:31:22.576 user 0m18.555s
00:31:22.576 sys 0m7.246s
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:22.576 ************************************
00:31:22.576 END TEST nvmf_host_management
00:31:22.576 ************************************
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:22.576 ************************************
00:31:22.576 START TEST nvmf_lvol
00:31:22.576 ************************************
00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:31:22.576 * Looking for test storage...
00:31:22.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:22.576 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:22.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.577 --rc genhtml_branch_coverage=1 00:31:22.577 --rc genhtml_function_coverage=1 00:31:22.577 --rc genhtml_legend=1 00:31:22.577 --rc geninfo_all_blocks=1 00:31:22.577 --rc geninfo_unexecuted_blocks=1 00:31:22.577 00:31:22.577 ' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:22.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.577 --rc genhtml_branch_coverage=1 00:31:22.577 --rc genhtml_function_coverage=1 00:31:22.577 --rc genhtml_legend=1 00:31:22.577 --rc geninfo_all_blocks=1 00:31:22.577 --rc geninfo_unexecuted_blocks=1 00:31:22.577 00:31:22.577 ' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:22.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.577 --rc genhtml_branch_coverage=1 00:31:22.577 --rc genhtml_function_coverage=1 00:31:22.577 --rc genhtml_legend=1 00:31:22.577 --rc geninfo_all_blocks=1 00:31:22.577 --rc geninfo_unexecuted_blocks=1 00:31:22.577 00:31:22.577 ' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:22.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.577 --rc genhtml_branch_coverage=1 00:31:22.577 --rc genhtml_function_coverage=1 00:31:22.577 --rc genhtml_legend=1 00:31:22.577 --rc geninfo_all_blocks=1 00:31:22.577 --rc geninfo_unexecuted_blocks=1 00:31:22.577 00:31:22.577 ' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.577 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.577 
13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.578 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.718 13:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.718 13:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:30.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:30.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.718 13:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:30.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.718 13:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:30.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.718 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:31:30.719 00:31:30.719 --- 10.0.0.2 ping statistics --- 00:31:30.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.719 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:30.719 00:31:30.719 --- 10.0.0.1 ping statistics --- 00:31:30.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.719 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=865632 
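The trace above shows `nvmf_tcp_init` building the loopback NVMe/TCP topology: one port of the NIC pair is moved into a network namespace, the two ends get 10.0.0.1/10.0.0.2, TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions. A condensed sketch of that sequence follows — interface names and addresses are taken from this log, but the `run` echo-only wrapper is an assumption added so the sketch is safe to execute without root (replace its body with `"$@"` and run as root to actually apply):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf/common.sh in this log.
# "run" only echoes each command, so this prints the plan instead of applying it.
set -eu
TARGET_IF=cvl_0_0        # moved into the namespace; hosts the SPDK target
INITIATOR_IF=cvl_0_1     # stays in the default namespace; hosts the initiator
NS=cvl_0_0_ns_spdk
run() { echo "$@"; }     # replace body with "$@" to actually apply (needs root)

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Putting the target interface in its own namespace is what lets a single host act as both NVMe-oF target and initiator over a real NIC, which is why the log's `NVMF_APP` is later prefixed with `ip netns exec cvl_0_0_ns_spdk`.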
00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 865632 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 865632 ']' 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:30.719 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:30.719 [2024-11-06 13:55:53.031543] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.719 [2024-11-06 13:55:53.032694] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:31:30.719 [2024-11-06 13:55:53.032744] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.719 [2024-11-06 13:55:53.115571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:30.719 [2024-11-06 13:55:53.156416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.719 [2024-11-06 13:55:53.156451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.719 [2024-11-06 13:55:53.156459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.719 [2024-11-06 13:55:53.156466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.719 [2024-11-06 13:55:53.156471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.719 [2024-11-06 13:55:53.158054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.719 [2024-11-06 13:55:53.158171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.719 [2024-11-06 13:55:53.158174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.719 [2024-11-06 13:55:53.214140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.719 [2024-11-06 13:55:53.214543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:30.719 [2024-11-06 13:55:53.214922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:30.719 [2024-11-06 13:55:53.215202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.719 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.719 [2024-11-06 13:55:54.038808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.719 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.981 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:30.981 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.242 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:31.242 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:31.501 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:31.501 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=42444a1e-3ef0-4853-8f8e-a2590fa66cf4 00:31:31.501 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42444a1e-3ef0-4853-8f8e-a2590fa66cf4 lvol 20 00:31:31.761 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6d261fa6-157e-4a65-86f9-e0753cc3dbb9 00:31:31.762 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:32.021 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d261fa6-157e-4a65-86f9-e0753cc3dbb9 00:31:32.021 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.281 [2024-11-06 13:55:55.498865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.281 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.542 
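The RPC calls traced above (nvmf_lvol.sh@21 through @38) assemble the lvol stack end to end: a TCP transport, two 64 MiB malloc bdevs, a RAID-0 across them, an lvstore on the RAID, a 20 MiB lvol, and an NVMe-oF subsystem exporting it on 10.0.0.2:4420. A condensed sketch of the same sequence — echo-only dry run; `rpc` stands in for `scripts/rpc.py` against the running target, and since the lvstore/lvol UUIDs are generated at run time, the `<...-uuid>` placeholders here are illustrative, not values to copy:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_lvol.sh setup phase seen in this log.
# rpc() only echoes; point it at scripts/rpc.py to drive a live target.
set -eu
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512                      # -> Malloc0
rpc bdev_malloc_create 64 512                      # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc bdev_lvol_create_lvstore raid0 lvs             # prints the lvstore UUID
rpc bdev_lvol_create -u '<lvs-uuid>' lvol 20       # prints the lvol UUID
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 '<lvol-uuid>'
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Layering the lvstore on a RAID-0 of malloc bdevs gives the lvol a striped, RAM-backed base, so the subsequent perf run stresses the lvol logic rather than disk latency.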
13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=866089 00:31:32.542 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:32.543 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:33.484 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6d261fa6-157e-4a65-86f9-e0753cc3dbb9 MY_SNAPSHOT 00:31:33.744 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c841cd7d-96bd-47f0-934b-ed631f18a2b1 00:31:33.744 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6d261fa6-157e-4a65-86f9-e0753cc3dbb9 30 00:31:34.005 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c841cd7d-96bd-47f0-934b-ed631f18a2b1 MY_CLONE 00:31:34.005 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4282755d-71d2-4e09-b5a4-13c42b012b50 00:31:34.006 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4282755d-71d2-4e09-b5a4-13c42b012b50 00:31:34.576 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 866089 00:31:44.573 Initializing NVMe Controllers 00:31:44.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:44.573 
Controller IO queue size 128, less than required. 00:31:44.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:44.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:44.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:44.573 Initialization complete. Launching workers. 00:31:44.573 ======================================================== 00:31:44.573 Latency(us) 00:31:44.573 Device Information : IOPS MiB/s Average min max 00:31:44.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12414.81 48.50 10315.68 1621.89 54514.16 00:31:44.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15779.48 61.64 8111.43 3885.36 55077.92 00:31:44.574 ======================================================== 00:31:44.574 Total : 28194.29 110.13 9082.03 1621.89 55077.92 00:31:44.574 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d261fa6-157e-4a65-86f9-e0753cc3dbb9 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42444a1e-3ef0-4853-8f8e-a2590fa66cf4 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:44.574 rmmod nvme_tcp 00:31:44.574 rmmod nvme_fabrics 00:31:44.574 rmmod nvme_keyring 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 865632 ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 865632 ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 865632' 00:31:44.574 killing process with pid 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 865632 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.574 13:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.574 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.959 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.959 00:31:45.959 real 0m23.580s 00:31:45.959 user 0m55.742s 00:31:45.959 sys 0m10.442s 00:31:45.959 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:45.959 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:45.959 ************************************ 00:31:45.959 END TEST nvmf_lvol 00:31:45.959 ************************************ 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.959 ************************************ 00:31:45.959 START TEST nvmf_lvs_grow 00:31:45.959 ************************************ 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:45.959 * Looking for test storage... 
00:31:45.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.959 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.959 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.959 --rc genhtml_branch_coverage=1 00:31:45.959 --rc genhtml_function_coverage=1 00:31:45.959 --rc genhtml_legend=1 00:31:45.959 --rc geninfo_all_blocks=1 00:31:45.959 --rc geninfo_unexecuted_blocks=1 00:31:45.959 00:31:45.959 ' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.959 --rc genhtml_branch_coverage=1 00:31:45.959 --rc genhtml_function_coverage=1 00:31:45.959 --rc genhtml_legend=1 00:31:45.959 --rc geninfo_all_blocks=1 00:31:45.959 --rc geninfo_unexecuted_blocks=1 00:31:45.959 00:31:45.959 ' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.959 --rc genhtml_branch_coverage=1 00:31:45.959 --rc genhtml_function_coverage=1 00:31:45.959 --rc genhtml_legend=1 00:31:45.959 --rc geninfo_all_blocks=1 00:31:45.959 --rc geninfo_unexecuted_blocks=1 00:31:45.959 00:31:45.959 ' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.959 --rc genhtml_branch_coverage=1 00:31:45.959 --rc genhtml_function_coverage=1 00:31:45.959 --rc genhtml_legend=1 00:31:45.959 --rc geninfo_all_blocks=1 00:31:45.959 --rc 
geninfo_unexecuted_blocks=1 00:31:45.959 00:31:45.959 ' 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.959 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:45.960 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.960 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.960 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.960 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:54.132 
13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.132 13:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.132 13:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:54.132 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:54.132 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.132 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:54.133 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.133 13:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:54.133 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.133 
13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:31:54.133 00:31:54.133 --- 10.0.0.2 ping statistics --- 00:31:54.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.133 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:31:54.133 00:31:54.133 --- 10.0.0.1 ping statistics --- 00:31:54.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.133 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:54.133 13:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=872346 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 872346 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 872346 ']' 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:54.133 13:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:54.133 [2024-11-06 13:56:16.804617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.133 [2024-11-06 13:56:16.805765] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:31:54.133 [2024-11-06 13:56:16.805818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.133 [2024-11-06 13:56:16.887973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.133 [2024-11-06 13:56:16.927905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.133 [2024-11-06 13:56:16.927943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.133 [2024-11-06 13:56:16.927951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.133 [2024-11-06 13:56:16.927958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.133 [2024-11-06 13:56:16.927964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.133 [2024-11-06 13:56:16.928556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.133 [2024-11-06 13:56:16.984346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.133 [2024-11-06 13:56:16.984604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.395 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:54.656 [2024-11-06 13:56:17.817071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:54.656 ************************************ 00:31:54.656 START TEST lvs_grow_clean 00:31:54.656 ************************************ 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:31:54.656 13:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:54.656 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:54.916 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:54.916 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b7352006-11c1-48a7-ad44-453ccef7cdb2 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:55.177 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b7352006-11c1-48a7-ad44-453ccef7cdb2 lvol 150 00:31:55.438 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f88bdf4c-8d84-4dce-960e-d1f8c061c61b 00:31:55.438 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:55.438 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:55.698 [2024-11-06 13:56:18.824948] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:55.698 [2024-11-06 13:56:18.825043] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:55.698 true 00:31:55.698 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:55.698 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:31:55.698 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:55.698 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:55.959 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f88bdf4c-8d84-4dce-960e-d1f8c061c61b 00:31:56.220 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.220 [2024-11-06 13:56:19.541346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.220 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=873052 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 873052 /var/tmp/bdevperf.sock 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 873052 ']' 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:56.479 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:56.479 [2024-11-06 13:56:19.779813] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:31:56.479 [2024-11-06 13:56:19.779889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873052 ] 00:31:56.739 [2024-11-06 13:56:19.873901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.739 [2024-11-06 13:56:19.925072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.310 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:57.310 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:31:57.310 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:57.572 Nvme0n1 00:31:57.572 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:57.834 [ 00:31:57.834 { 00:31:57.834 "name": "Nvme0n1", 00:31:57.834 "aliases": [ 00:31:57.834 "f88bdf4c-8d84-4dce-960e-d1f8c061c61b" 00:31:57.834 ], 00:31:57.834 "product_name": "NVMe disk", 00:31:57.834 
"block_size": 4096, 00:31:57.834 "num_blocks": 38912, 00:31:57.834 "uuid": "f88bdf4c-8d84-4dce-960e-d1f8c061c61b", 00:31:57.834 "numa_id": 0, 00:31:57.834 "assigned_rate_limits": { 00:31:57.834 "rw_ios_per_sec": 0, 00:31:57.834 "rw_mbytes_per_sec": 0, 00:31:57.834 "r_mbytes_per_sec": 0, 00:31:57.834 "w_mbytes_per_sec": 0 00:31:57.834 }, 00:31:57.834 "claimed": false, 00:31:57.834 "zoned": false, 00:31:57.834 "supported_io_types": { 00:31:57.834 "read": true, 00:31:57.834 "write": true, 00:31:57.834 "unmap": true, 00:31:57.834 "flush": true, 00:31:57.834 "reset": true, 00:31:57.834 "nvme_admin": true, 00:31:57.834 "nvme_io": true, 00:31:57.834 "nvme_io_md": false, 00:31:57.834 "write_zeroes": true, 00:31:57.834 "zcopy": false, 00:31:57.834 "get_zone_info": false, 00:31:57.834 "zone_management": false, 00:31:57.834 "zone_append": false, 00:31:57.834 "compare": true, 00:31:57.834 "compare_and_write": true, 00:31:57.834 "abort": true, 00:31:57.834 "seek_hole": false, 00:31:57.834 "seek_data": false, 00:31:57.834 "copy": true, 00:31:57.834 "nvme_iov_md": false 00:31:57.834 }, 00:31:57.834 "memory_domains": [ 00:31:57.834 { 00:31:57.834 "dma_device_id": "system", 00:31:57.834 "dma_device_type": 1 00:31:57.834 } 00:31:57.834 ], 00:31:57.834 "driver_specific": { 00:31:57.834 "nvme": [ 00:31:57.834 { 00:31:57.834 "trid": { 00:31:57.834 "trtype": "TCP", 00:31:57.834 "adrfam": "IPv4", 00:31:57.834 "traddr": "10.0.0.2", 00:31:57.834 "trsvcid": "4420", 00:31:57.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:57.834 }, 00:31:57.834 "ctrlr_data": { 00:31:57.834 "cntlid": 1, 00:31:57.834 "vendor_id": "0x8086", 00:31:57.834 "model_number": "SPDK bdev Controller", 00:31:57.834 "serial_number": "SPDK0", 00:31:57.834 "firmware_revision": "25.01", 00:31:57.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.834 "oacs": { 00:31:57.834 "security": 0, 00:31:57.834 "format": 0, 00:31:57.834 "firmware": 0, 00:31:57.834 "ns_manage": 0 00:31:57.834 }, 00:31:57.834 "multi_ctrlr": true, 
00:31:57.835 "ana_reporting": false 00:31:57.835 }, 00:31:57.835 "vs": { 00:31:57.835 "nvme_version": "1.3" 00:31:57.835 }, 00:31:57.835 "ns_data": { 00:31:57.835 "id": 1, 00:31:57.835 "can_share": true 00:31:57.835 } 00:31:57.835 } 00:31:57.835 ], 00:31:57.835 "mp_policy": "active_passive" 00:31:57.835 } 00:31:57.835 } 00:31:57.835 ] 00:31:57.835 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=873186 00:31:57.835 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:57.835 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:57.835 Running I/O for 10 seconds... 00:31:59.220 Latency(us) 00:31:59.220 [2024-11-06T12:56:22.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.220 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:31:59.220 [2024-11-06T12:56:22.596Z] =================================================================================================================== 00:31:59.220 [2024-11-06T12:56:22.596Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:31:59.220 00:31:59.790 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:00.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.051 Nvme0n1 : 2.00 17817.00 69.60 0.00 0.00 0.00 0.00 0.00 00:32:00.051 [2024-11-06T12:56:23.427Z] 
=================================================================================================================== 00:32:00.051 [2024-11-06T12:56:23.427Z] Total : 17817.00 69.60 0.00 0.00 0.00 0.00 0.00 00:32:00.051 00:32:00.051 true 00:32:00.051 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:00.051 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:00.051 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:00.051 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:00.051 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 873186 00:32:00.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.991 Nvme0n1 : 3.00 17884.33 69.86 0.00 0.00 0.00 0.00 0.00 00:32:00.991 [2024-11-06T12:56:24.367Z] =================================================================================================================== 00:32:00.991 [2024-11-06T12:56:24.367Z] Total : 17884.33 69.86 0.00 0.00 0.00 0.00 0.00 00:32:00.991 00:32:01.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.931 Nvme0n1 : 4.00 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:32:01.931 [2024-11-06T12:56:25.307Z] =================================================================================================================== 00:32:01.931 [2024-11-06T12:56:25.308Z] Total : 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:32:01.932 00:32:02.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:02.871 Nvme0n1 : 5.00 17944.20 70.09 0.00 0.00 0.00 0.00 0.00 00:32:02.871 [2024-11-06T12:56:26.247Z] =================================================================================================================== 00:32:02.871 [2024-11-06T12:56:26.247Z] Total : 17944.20 70.09 0.00 0.00 0.00 0.00 0.00 00:32:02.871 00:32:03.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.809 Nvme0n1 : 6.00 17980.33 70.24 0.00 0.00 0.00 0.00 0.00 00:32:03.809 [2024-11-06T12:56:27.185Z] =================================================================================================================== 00:32:03.809 [2024-11-06T12:56:27.185Z] Total : 17980.33 70.24 0.00 0.00 0.00 0.00 0.00 00:32:03.809 00:32:05.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.190 Nvme0n1 : 7.00 17988.00 70.27 0.00 0.00 0.00 0.00 0.00 00:32:05.190 [2024-11-06T12:56:28.566Z] =================================================================================================================== 00:32:05.190 [2024-11-06T12:56:28.566Z] Total : 17988.00 70.27 0.00 0.00 0.00 0.00 0.00 00:32:05.190 00:32:06.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.129 Nvme0n1 : 8.00 18009.62 70.35 0.00 0.00 0.00 0.00 0.00 00:32:06.129 [2024-11-06T12:56:29.505Z] =================================================================================================================== 00:32:06.129 [2024-11-06T12:56:29.505Z] Total : 18009.62 70.35 0.00 0.00 0.00 0.00 0.00 00:32:06.129 00:32:07.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.072 Nvme0n1 : 9.00 18019.44 70.39 0.00 0.00 0.00 0.00 0.00 00:32:07.072 [2024-11-06T12:56:30.448Z] =================================================================================================================== 00:32:07.072 [2024-11-06T12:56:30.448Z] Total : 18019.44 70.39 0.00 0.00 0.00 0.00 0.00 00:32:07.072 
00:32:08.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.014 Nvme0n1 : 10.00 18027.20 70.42 0.00 0.00 0.00 0.00 0.00 00:32:08.014 [2024-11-06T12:56:31.390Z] =================================================================================================================== 00:32:08.014 [2024-11-06T12:56:31.390Z] Total : 18027.20 70.42 0.00 0.00 0.00 0.00 0.00 00:32:08.014 00:32:08.014 00:32:08.014 Latency(us) 00:32:08.014 [2024-11-06T12:56:31.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.014 Nvme0n1 : 10.00 18031.89 70.44 0.00 0.00 7096.41 2512.21 14308.69 00:32:08.014 [2024-11-06T12:56:31.390Z] =================================================================================================================== 00:32:08.014 [2024-11-06T12:56:31.390Z] Total : 18031.89 70.44 0.00 0.00 7096.41 2512.21 14308.69 00:32:08.014 { 00:32:08.014 "results": [ 00:32:08.014 { 00:32:08.014 "job": "Nvme0n1", 00:32:08.014 "core_mask": "0x2", 00:32:08.014 "workload": "randwrite", 00:32:08.014 "status": "finished", 00:32:08.014 "queue_depth": 128, 00:32:08.014 "io_size": 4096, 00:32:08.014 "runtime": 10.004495, 00:32:08.014 "iops": 18031.894663348823, 00:32:08.014 "mibps": 70.43708852870634, 00:32:08.014 "io_failed": 0, 00:32:08.014 "io_timeout": 0, 00:32:08.014 "avg_latency_us": 7096.412863858094, 00:32:08.014 "min_latency_us": 2512.213333333333, 00:32:08.014 "max_latency_us": 14308.693333333333 00:32:08.014 } 00:32:08.014 ], 00:32:08.014 "core_count": 1 00:32:08.014 } 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 873052 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 873052 ']' 00:32:08.014 13:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 873052 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 873052 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 873052' 00:32:08.014 killing process with pid 873052 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 873052 00:32:08.014 Received shutdown signal, test time was about 10.000000 seconds 00:32:08.014 00:32:08.014 Latency(us) 00:32:08.014 [2024-11-06T12:56:31.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.014 [2024-11-06T12:56:31.390Z] =================================================================================================================== 00:32:08.014 [2024-11-06T12:56:31.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 873052 00:32:08.014 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:08.274 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.534 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:08.534 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:08.794 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:08.794 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:08.794 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:08.794 [2024-11-06 13:56:32.161000] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:09.054 request: 00:32:09.054 { 00:32:09.054 "uuid": "b7352006-11c1-48a7-ad44-453ccef7cdb2", 00:32:09.054 "method": 
"bdev_lvol_get_lvstores", 00:32:09.054 "req_id": 1 00:32:09.054 } 00:32:09.054 Got JSON-RPC error response 00:32:09.054 response: 00:32:09.054 { 00:32:09.054 "code": -19, 00:32:09.054 "message": "No such device" 00:32:09.054 } 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:09.054 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:09.314 aio_bdev 00:32:09.314 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f88bdf4c-8d84-4dce-960e-d1f8c061c61b 00:32:09.314 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=f88bdf4c-8d84-4dce-960e-d1f8c061c61b 00:32:09.315 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:09.315 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:09.315 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:09.315 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:09.315 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:09.575 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f88bdf4c-8d84-4dce-960e-d1f8c061c61b -t 2000 00:32:09.575 [ 00:32:09.575 { 00:32:09.575 "name": "f88bdf4c-8d84-4dce-960e-d1f8c061c61b", 00:32:09.575 "aliases": [ 00:32:09.575 "lvs/lvol" 00:32:09.575 ], 00:32:09.575 "product_name": "Logical Volume", 00:32:09.575 "block_size": 4096, 00:32:09.575 "num_blocks": 38912, 00:32:09.575 "uuid": "f88bdf4c-8d84-4dce-960e-d1f8c061c61b", 00:32:09.575 "assigned_rate_limits": { 00:32:09.575 "rw_ios_per_sec": 0, 00:32:09.575 "rw_mbytes_per_sec": 0, 00:32:09.575 "r_mbytes_per_sec": 0, 00:32:09.575 "w_mbytes_per_sec": 0 00:32:09.575 }, 00:32:09.575 "claimed": false, 00:32:09.575 "zoned": false, 00:32:09.575 "supported_io_types": { 00:32:09.575 "read": true, 00:32:09.575 "write": true, 00:32:09.575 "unmap": true, 00:32:09.575 "flush": false, 00:32:09.575 "reset": true, 00:32:09.575 "nvme_admin": false, 00:32:09.575 "nvme_io": false, 00:32:09.575 "nvme_io_md": false, 00:32:09.575 "write_zeroes": true, 00:32:09.575 "zcopy": false, 00:32:09.575 "get_zone_info": false, 00:32:09.575 "zone_management": false, 00:32:09.575 "zone_append": false, 00:32:09.575 "compare": false, 00:32:09.575 "compare_and_write": false, 00:32:09.575 "abort": false, 00:32:09.575 "seek_hole": true, 00:32:09.575 "seek_data": true, 00:32:09.575 "copy": false, 00:32:09.575 "nvme_iov_md": false 00:32:09.575 }, 00:32:09.575 "driver_specific": { 00:32:09.575 "lvol": { 00:32:09.575 "lvol_store_uuid": "b7352006-11c1-48a7-ad44-453ccef7cdb2", 00:32:09.575 "base_bdev": "aio_bdev", 00:32:09.575 
"thin_provision": false, 00:32:09.575 "num_allocated_clusters": 38, 00:32:09.575 "snapshot": false, 00:32:09.575 "clone": false, 00:32:09.575 "esnap_clone": false 00:32:09.575 } 00:32:09.575 } 00:32:09.575 } 00:32:09.575 ] 00:32:09.575 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:09.575 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:09.575 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:09.836 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:09.836 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7352006-11c1-48a7-ad44-453ccef7cdb2 00:32:09.836 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:10.095 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:10.095 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f88bdf4c-8d84-4dce-960e-d1f8c061c61b 00:32:10.355 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7352006-11c1-48a7-ad44-453ccef7cdb2 
00:32:10.355 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:10.616 00:32:10.616 real 0m16.045s 00:32:10.616 user 0m15.637s 00:32:10.616 sys 0m1.488s 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:10.616 ************************************ 00:32:10.616 END TEST lvs_grow_clean 00:32:10.616 ************************************ 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:10.616 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:10.907 ************************************ 00:32:10.907 START TEST lvs_grow_dirty 00:32:10.907 ************************************ 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:10.907 13:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:10.907 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:10.908 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:10.908 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:10.908 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:10.908 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:10.908 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:11.168 13:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=735fb41a-470c-46e3-bf27-a042faf2e329 00:32:11.168 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:11.168 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 735fb41a-470c-46e3-bf27-a042faf2e329 lvol 150 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:11.428 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:11.691 [2024-11-06 13:56:34.889041] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:11.691 [2024-11-06 
13:56:34.889197] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:11.691 true 00:32:11.691 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:11.691 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:11.951 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:11.951 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:11.951 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:12.211 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:12.211 [2024-11-06 13:56:35.573260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=876100 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 876100 /var/tmp/bdevperf.sock 00:32:12.471 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 876100 ']' 00:32:12.472 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:12.472 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:12.472 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:12.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:12.472 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:12.472 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:12.472 [2024-11-06 13:56:35.793418] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:32:12.472 [2024-11-06 13:56:35.793477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876100 ] 00:32:12.731 [2024-11-06 13:56:35.883497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.731 [2024-11-06 13:56:35.927840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.303 13:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:13.303 13:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:13.303 13:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:13.876 Nvme0n1 00:32:13.876 13:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:13.876 [ 00:32:13.876 { 00:32:13.876 "name": "Nvme0n1", 00:32:13.876 "aliases": [ 00:32:13.876 "d8d9001f-828f-4ebc-a497-ec9c603896e6" 00:32:13.876 ], 00:32:13.876 "product_name": "NVMe disk", 00:32:13.876 "block_size": 4096, 00:32:13.876 "num_blocks": 38912, 00:32:13.876 "uuid": "d8d9001f-828f-4ebc-a497-ec9c603896e6", 00:32:13.876 "numa_id": 0, 00:32:13.876 "assigned_rate_limits": { 00:32:13.876 "rw_ios_per_sec": 0, 00:32:13.876 "rw_mbytes_per_sec": 0, 00:32:13.876 "r_mbytes_per_sec": 0, 00:32:13.876 "w_mbytes_per_sec": 0 00:32:13.876 }, 00:32:13.876 "claimed": false, 00:32:13.876 "zoned": false, 
00:32:13.876 "supported_io_types": { 00:32:13.876 "read": true, 00:32:13.876 "write": true, 00:32:13.876 "unmap": true, 00:32:13.876 "flush": true, 00:32:13.876 "reset": true, 00:32:13.876 "nvme_admin": true, 00:32:13.876 "nvme_io": true, 00:32:13.876 "nvme_io_md": false, 00:32:13.876 "write_zeroes": true, 00:32:13.876 "zcopy": false, 00:32:13.876 "get_zone_info": false, 00:32:13.876 "zone_management": false, 00:32:13.876 "zone_append": false, 00:32:13.876 "compare": true, 00:32:13.876 "compare_and_write": true, 00:32:13.876 "abort": true, 00:32:13.876 "seek_hole": false, 00:32:13.876 "seek_data": false, 00:32:13.876 "copy": true, 00:32:13.876 "nvme_iov_md": false 00:32:13.876 }, 00:32:13.876 "memory_domains": [ 00:32:13.876 { 00:32:13.876 "dma_device_id": "system", 00:32:13.876 "dma_device_type": 1 00:32:13.876 } 00:32:13.876 ], 00:32:13.876 "driver_specific": { 00:32:13.876 "nvme": [ 00:32:13.876 { 00:32:13.876 "trid": { 00:32:13.876 "trtype": "TCP", 00:32:13.876 "adrfam": "IPv4", 00:32:13.876 "traddr": "10.0.0.2", 00:32:13.876 "trsvcid": "4420", 00:32:13.876 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:13.876 }, 00:32:13.876 "ctrlr_data": { 00:32:13.876 "cntlid": 1, 00:32:13.876 "vendor_id": "0x8086", 00:32:13.876 "model_number": "SPDK bdev Controller", 00:32:13.876 "serial_number": "SPDK0", 00:32:13.876 "firmware_revision": "25.01", 00:32:13.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.876 "oacs": { 00:32:13.876 "security": 0, 00:32:13.876 "format": 0, 00:32:13.876 "firmware": 0, 00:32:13.876 "ns_manage": 0 00:32:13.876 }, 00:32:13.876 "multi_ctrlr": true, 00:32:13.876 "ana_reporting": false 00:32:13.876 }, 00:32:13.876 "vs": { 00:32:13.876 "nvme_version": "1.3" 00:32:13.876 }, 00:32:13.876 "ns_data": { 00:32:13.876 "id": 1, 00:32:13.876 "can_share": true 00:32:13.876 } 00:32:13.876 } 00:32:13.876 ], 00:32:13.876 "mp_policy": "active_passive" 00:32:13.876 } 00:32:13.876 } 00:32:13.876 ] 00:32:13.876 13:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=876294 00:32:13.876 13:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:13.876 13:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:14.137 Running I/O for 10 seconds... 00:32:15.080 Latency(us) 00:32:15.080 [2024-11-06T12:56:38.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.080 Nvme0n1 : 1.00 17679.00 69.06 0.00 0.00 0.00 0.00 0.00 00:32:15.080 [2024-11-06T12:56:38.456Z] =================================================================================================================== 00:32:15.080 [2024-11-06T12:56:38.456Z] Total : 17679.00 69.06 0.00 0.00 0.00 0.00 0.00 00:32:15.080 00:32:16.022 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:16.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.022 Nvme0n1 : 2.00 17856.50 69.75 0.00 0.00 0.00 0.00 0.00 00:32:16.022 [2024-11-06T12:56:39.398Z] =================================================================================================================== 00:32:16.022 [2024-11-06T12:56:39.398Z] Total : 17856.50 69.75 0.00 0.00 0.00 0.00 0.00 00:32:16.022 00:32:16.022 true 00:32:16.022 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:16.022 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:16.283 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:16.283 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:16.284 13:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 876294 00:32:17.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.224 Nvme0n1 : 3.00 17894.67 69.90 0.00 0.00 0.00 0.00 0.00 00:32:17.224 [2024-11-06T12:56:40.600Z] =================================================================================================================== 00:32:17.224 [2024-11-06T12:56:40.600Z] Total : 17894.67 69.90 0.00 0.00 0.00 0.00 0.00 00:32:17.224 00:32:18.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.166 Nvme0n1 : 4.00 17945.25 70.10 0.00 0.00 0.00 0.00 0.00 00:32:18.166 [2024-11-06T12:56:41.542Z] =================================================================================================================== 00:32:18.166 [2024-11-06T12:56:41.542Z] Total : 17945.25 70.10 0.00 0.00 0.00 0.00 0.00 00:32:18.166 00:32:19.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.106 Nvme0n1 : 5.00 17963.00 70.17 0.00 0.00 0.00 0.00 0.00 00:32:19.106 [2024-11-06T12:56:42.482Z] =================================================================================================================== 00:32:19.106 [2024-11-06T12:56:42.483Z] Total : 17963.00 70.17 0.00 0.00 0.00 0.00 0.00 00:32:19.107 00:32:20.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:20.048 Nvme0n1 : 6.00 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:32:20.048 [2024-11-06T12:56:43.424Z] =================================================================================================================== 00:32:20.048 [2024-11-06T12:56:43.424Z] Total : 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:32:20.048 00:32:21.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.131 Nvme0n1 : 7.00 18019.57 70.39 0.00 0.00 0.00 0.00 0.00 00:32:21.131 [2024-11-06T12:56:44.507Z] =================================================================================================================== 00:32:21.131 [2024-11-06T12:56:44.507Z] Total : 18019.57 70.39 0.00 0.00 0.00 0.00 0.00 00:32:21.131 00:32:22.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.187 Nvme0n1 : 8.00 18029.38 70.43 0.00 0.00 0.00 0.00 0.00 00:32:22.187 [2024-11-06T12:56:45.563Z] =================================================================================================================== 00:32:22.187 [2024-11-06T12:56:45.563Z] Total : 18029.38 70.43 0.00 0.00 0.00 0.00 0.00 00:32:22.187 00:32:23.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.129 Nvme0n1 : 9.00 18036.89 70.46 0.00 0.00 0.00 0.00 0.00 00:32:23.129 [2024-11-06T12:56:46.505Z] =================================================================================================================== 00:32:23.129 [2024-11-06T12:56:46.505Z] Total : 18036.89 70.46 0.00 0.00 0.00 0.00 0.00 00:32:23.129 00:32:24.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.072 Nvme0n1 : 10.00 18049.30 70.51 0.00 0.00 0.00 0.00 0.00 00:32:24.072 [2024-11-06T12:56:47.448Z] =================================================================================================================== 00:32:24.072 [2024-11-06T12:56:47.448Z] Total : 18049.30 70.51 0.00 0.00 0.00 0.00 0.00 00:32:24.072 00:32:24.072 
00:32:24.072 Latency(us) 00:32:24.072 [2024-11-06T12:56:47.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.072 Nvme0n1 : 10.00 18053.84 70.52 0.00 0.00 7088.45 2471.25 13707.95 00:32:24.072 [2024-11-06T12:56:47.448Z] =================================================================================================================== 00:32:24.072 [2024-11-06T12:56:47.448Z] Total : 18053.84 70.52 0.00 0.00 7088.45 2471.25 13707.95 00:32:24.072 { 00:32:24.072 "results": [ 00:32:24.072 { 00:32:24.072 "job": "Nvme0n1", 00:32:24.072 "core_mask": "0x2", 00:32:24.072 "workload": "randwrite", 00:32:24.072 "status": "finished", 00:32:24.072 "queue_depth": 128, 00:32:24.072 "io_size": 4096, 00:32:24.072 "runtime": 10.004575, 00:32:24.072 "iops": 18053.840368031626, 00:32:24.072 "mibps": 70.52281393762354, 00:32:24.072 "io_failed": 0, 00:32:24.072 "io_timeout": 0, 00:32:24.072 "avg_latency_us": 7088.450724703477, 00:32:24.072 "min_latency_us": 2471.2533333333336, 00:32:24.072 "max_latency_us": 13707.946666666667 00:32:24.072 } 00:32:24.072 ], 00:32:24.072 "core_count": 1 00:32:24.072 } 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 876100 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 876100 ']' 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 876100 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:24.072 13:56:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 876100 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 876100' 00:32:24.072 killing process with pid 876100 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 876100 00:32:24.072 Received shutdown signal, test time was about 10.000000 seconds 00:32:24.072 00:32:24.072 Latency(us) 00:32:24.072 [2024-11-06T12:56:47.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.072 [2024-11-06T12:56:47.448Z] =================================================================================================================== 00:32:24.072 [2024-11-06T12:56:47.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.072 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 876100 00:32:24.334 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:24.334 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.595 13:56:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:24.595 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 872346 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 872346 00:32:24.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 872346 Killed "${NVMF_APP[@]}" "$@" 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=878368 00:32:24.857 13:56:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 878368 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 878368 ']' 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:24.857 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:24.857 [2024-11-06 13:56:48.148474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.857 [2024-11-06 13:56:48.149638] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:32:24.857 [2024-11-06 13:56:48.149699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.118 [2024-11-06 13:56:48.232644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.118 [2024-11-06 13:56:48.273210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.118 [2024-11-06 13:56:48.273248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.118 [2024-11-06 13:56:48.273256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.119 [2024-11-06 13:56:48.273263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.119 [2024-11-06 13:56:48.273269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.119 [2024-11-06 13:56:48.273874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.119 [2024-11-06 13:56:48.329761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.119 [2024-11-06 13:56:48.330022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.691 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:25.951 [2024-11-06 13:56:49.136418] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:25.951 [2024-11-06 13:56:49.136515] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:25.951 [2024-11-06 13:56:49.136545] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:25.951 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:26.212 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8d9001f-828f-4ebc-a497-ec9c603896e6 -t 2000 00:32:26.212 [ 00:32:26.212 { 00:32:26.212 "name": "d8d9001f-828f-4ebc-a497-ec9c603896e6", 00:32:26.212 "aliases": [ 00:32:26.212 "lvs/lvol" 00:32:26.212 ], 00:32:26.212 "product_name": "Logical Volume", 00:32:26.212 "block_size": 4096, 00:32:26.212 "num_blocks": 38912, 00:32:26.212 "uuid": "d8d9001f-828f-4ebc-a497-ec9c603896e6", 00:32:26.212 "assigned_rate_limits": { 00:32:26.212 "rw_ios_per_sec": 0, 00:32:26.212 "rw_mbytes_per_sec": 0, 00:32:26.212 "r_mbytes_per_sec": 0, 00:32:26.212 "w_mbytes_per_sec": 0 00:32:26.212 }, 00:32:26.212 "claimed": false, 00:32:26.212 "zoned": false, 00:32:26.212 "supported_io_types": { 00:32:26.212 "read": true, 00:32:26.212 "write": true, 00:32:26.212 "unmap": true, 00:32:26.212 "flush": false, 00:32:26.212 "reset": true, 00:32:26.212 "nvme_admin": false, 00:32:26.212 "nvme_io": false, 00:32:26.212 "nvme_io_md": false, 00:32:26.212 "write_zeroes": true, 
00:32:26.212 "zcopy": false, 00:32:26.212 "get_zone_info": false, 00:32:26.212 "zone_management": false, 00:32:26.212 "zone_append": false, 00:32:26.212 "compare": false, 00:32:26.212 "compare_and_write": false, 00:32:26.212 "abort": false, 00:32:26.212 "seek_hole": true, 00:32:26.212 "seek_data": true, 00:32:26.212 "copy": false, 00:32:26.212 "nvme_iov_md": false 00:32:26.212 }, 00:32:26.212 "driver_specific": { 00:32:26.212 "lvol": { 00:32:26.212 "lvol_store_uuid": "735fb41a-470c-46e3-bf27-a042faf2e329", 00:32:26.212 "base_bdev": "aio_bdev", 00:32:26.212 "thin_provision": false, 00:32:26.212 "num_allocated_clusters": 38, 00:32:26.212 "snapshot": false, 00:32:26.212 "clone": false, 00:32:26.212 "esnap_clone": false 00:32:26.212 } 00:32:26.212 } 00:32:26.212 } 00:32:26.212 ] 00:32:26.212 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:26.212 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:26.212 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:26.473 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:26.473 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:26.473 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:26.473 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:26.473 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:26.734 [2024-11-06 13:56:49.982421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:26.734 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:26.993 request: 00:32:26.993 { 00:32:26.993 "uuid": "735fb41a-470c-46e3-bf27-a042faf2e329", 00:32:26.993 "method": "bdev_lvol_get_lvstores", 00:32:26.993 "req_id": 1 00:32:26.993 } 00:32:26.993 Got JSON-RPC error response 00:32:26.993 response: 00:32:26.993 { 00:32:26.993 "code": -19, 00:32:26.993 "message": "No such device" 00:32:26.993 } 00:32:26.993 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:26.993 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:26.993 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:26.993 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:26.993 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:26.993 aio_bdev 00:32:27.253 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:27.253 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:27.253 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:27.253 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:27.254 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:27.254 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:27.254 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:27.254 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8d9001f-828f-4ebc-a497-ec9c603896e6 -t 2000 00:32:27.514 [ 00:32:27.514 { 00:32:27.514 "name": "d8d9001f-828f-4ebc-a497-ec9c603896e6", 00:32:27.514 "aliases": [ 00:32:27.514 "lvs/lvol" 00:32:27.514 ], 00:32:27.514 "product_name": "Logical Volume", 00:32:27.514 "block_size": 4096, 00:32:27.514 "num_blocks": 38912, 00:32:27.514 "uuid": "d8d9001f-828f-4ebc-a497-ec9c603896e6", 00:32:27.514 "assigned_rate_limits": { 00:32:27.514 "rw_ios_per_sec": 0, 00:32:27.514 "rw_mbytes_per_sec": 0, 00:32:27.514 
"r_mbytes_per_sec": 0, 00:32:27.514 "w_mbytes_per_sec": 0 00:32:27.514 }, 00:32:27.514 "claimed": false, 00:32:27.514 "zoned": false, 00:32:27.514 "supported_io_types": { 00:32:27.514 "read": true, 00:32:27.514 "write": true, 00:32:27.514 "unmap": true, 00:32:27.514 "flush": false, 00:32:27.514 "reset": true, 00:32:27.514 "nvme_admin": false, 00:32:27.514 "nvme_io": false, 00:32:27.514 "nvme_io_md": false, 00:32:27.514 "write_zeroes": true, 00:32:27.514 "zcopy": false, 00:32:27.514 "get_zone_info": false, 00:32:27.514 "zone_management": false, 00:32:27.514 "zone_append": false, 00:32:27.514 "compare": false, 00:32:27.514 "compare_and_write": false, 00:32:27.514 "abort": false, 00:32:27.514 "seek_hole": true, 00:32:27.514 "seek_data": true, 00:32:27.514 "copy": false, 00:32:27.514 "nvme_iov_md": false 00:32:27.514 }, 00:32:27.514 "driver_specific": { 00:32:27.514 "lvol": { 00:32:27.514 "lvol_store_uuid": "735fb41a-470c-46e3-bf27-a042faf2e329", 00:32:27.514 "base_bdev": "aio_bdev", 00:32:27.514 "thin_provision": false, 00:32:27.514 "num_allocated_clusters": 38, 00:32:27.514 "snapshot": false, 00:32:27.514 "clone": false, 00:32:27.514 "esnap_clone": false 00:32:27.514 } 00:32:27.514 } 00:32:27.514 } 00:32:27.514 ] 00:32:27.514 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:27.514 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:27.514 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:27.514 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:27.514 13:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:27.514 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:27.774 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:27.774 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8d9001f-828f-4ebc-a497-ec9c603896e6 00:32:28.035 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 735fb41a-470c-46e3-bf27-a042faf2e329 00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:28.295 00:32:28.295 real 0m17.607s 00:32:28.295 user 0m35.590s 00:32:28.295 sys 0m2.945s 00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:28.295 ************************************ 00:32:28.295 END TEST lvs_grow_dirty 00:32:28.295 ************************************ 
00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:28.295 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:28.296 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:28.555 nvmf_trace.0 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.555 13:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.555 rmmod nvme_tcp 00:32:28.555 rmmod nvme_fabrics 00:32:28.555 rmmod nvme_keyring 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 878368 ']' 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 878368 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 878368 ']' 00:32:28.555 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 878368 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 878368 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:28.556 13:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 878368' 00:32:28.556 killing process with pid 878368 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 878368 00:32:28.556 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 878368 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.816 13:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.726 13:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.726 00:32:30.726 real 0m44.982s 00:32:30.726 user 0m54.023s 00:32:30.726 sys 0m10.670s 00:32:30.726 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:30.726 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:30.726 ************************************ 00:32:30.726 END TEST nvmf_lvs_grow 00:32:30.726 ************************************ 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:30.987 ************************************ 00:32:30.987 START TEST nvmf_bdev_io_wait 00:32:30.987 ************************************ 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:30.987 * Looking for test storage... 
00:32:30.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.987 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.988 --rc genhtml_branch_coverage=1 00:32:30.988 --rc genhtml_function_coverage=1 00:32:30.988 --rc genhtml_legend=1 00:32:30.988 --rc geninfo_all_blocks=1 00:32:30.988 --rc geninfo_unexecuted_blocks=1 00:32:30.988 00:32:30.988 ' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.988 --rc genhtml_branch_coverage=1 00:32:30.988 --rc genhtml_function_coverage=1 00:32:30.988 --rc genhtml_legend=1 00:32:30.988 --rc geninfo_all_blocks=1 00:32:30.988 --rc geninfo_unexecuted_blocks=1 00:32:30.988 00:32:30.988 ' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.988 --rc genhtml_branch_coverage=1 00:32:30.988 --rc genhtml_function_coverage=1 00:32:30.988 --rc genhtml_legend=1 00:32:30.988 --rc geninfo_all_blocks=1 00:32:30.988 --rc geninfo_unexecuted_blocks=1 00:32:30.988 00:32:30.988 ' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.988 --rc genhtml_branch_coverage=1 00:32:30.988 --rc genhtml_function_coverage=1 
00:32:30.988 --rc genhtml_legend=1 00:32:30.988 --rc geninfo_all_blocks=1 00:32:30.988 --rc geninfo_unexecuted_blocks=1 00:32:30.988 00:32:30.988 ' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.988 13:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.988 13:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.988 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.249 13:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.249 13:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.249 13:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:37.839 13:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.839 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:37.840 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:37.840 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:37.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:37.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.840 13:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.840 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:38.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:32:38.101 00:32:38.101 --- 10.0.0.2 ping statistics --- 00:32:38.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.101 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:38.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:38.101 00:32:38.101 --- 10.0.0.1 ping statistics --- 00:32:38.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.101 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.101 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.360 13:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:38.360 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:38.360 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:38.360 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:38.360 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=883317 00:32:38.360 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 883317 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 883317 ']' 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:38.361 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:38.361 [2024-11-06 13:57:01.545490] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:38.361 [2024-11-06 13:57:01.546477] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:32:38.361 [2024-11-06 13:57:01.546517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.361 [2024-11-06 13:57:01.625142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.361 [2024-11-06 13:57:01.662479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.361 [2024-11-06 13:57:01.662513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.361 [2024-11-06 13:57:01.662521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.361 [2024-11-06 13:57:01.662528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.361 [2024-11-06 13:57:01.662534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:38.361 [2024-11-06 13:57:01.664018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.361 [2024-11-06 13:57:01.664131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.361 [2024-11-06 13:57:01.664285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.361 [2024-11-06 13:57:01.664286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.361 [2024-11-06 13:57:01.664538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 [2024-11-06 13:57:02.437884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:39.301 [2024-11-06 13:57:02.438244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:39.301 [2024-11-06 13:57:02.439137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:39.301 [2024-11-06 13:57:02.439202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 [2024-11-06 13:57:02.448731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 Malloc0 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 [2024-11-06 13:57:02.512912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=883561 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=883564 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:39.301 13:57:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:39.301 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:39.301 { 00:32:39.301 "params": { 00:32:39.301 "name": "Nvme$subsystem", 00:32:39.302 "trtype": "$TEST_TRANSPORT", 00:32:39.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "$NVMF_PORT", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.302 "hdgst": ${hdgst:-false}, 00:32:39.302 "ddgst": ${ddgst:-false} 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 } 00:32:39.302 EOF 00:32:39.302 )") 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=883566 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:39.302 13:57:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=883570 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:39.302 { 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme$subsystem", 00:32:39.302 "trtype": "$TEST_TRANSPORT", 00:32:39.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "$NVMF_PORT", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.302 "hdgst": ${hdgst:-false}, 00:32:39.302 "ddgst": ${ddgst:-false} 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 } 00:32:39.302 EOF 00:32:39.302 )") 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:39.302 { 00:32:39.302 "params": { 00:32:39.302 "name": 
"Nvme$subsystem", 00:32:39.302 "trtype": "$TEST_TRANSPORT", 00:32:39.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "$NVMF_PORT", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.302 "hdgst": ${hdgst:-false}, 00:32:39.302 "ddgst": ${ddgst:-false} 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 } 00:32:39.302 EOF 00:32:39.302 )") 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:39.302 { 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme$subsystem", 00:32:39.302 "trtype": "$TEST_TRANSPORT", 00:32:39.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "$NVMF_PORT", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.302 "hdgst": ${hdgst:-false}, 00:32:39.302 "ddgst": ${ddgst:-false} 00:32:39.302 }, 00:32:39.302 "method": 
"bdev_nvme_attach_controller" 00:32:39.302 } 00:32:39.302 EOF 00:32:39.302 )") 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 883561 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme1", 00:32:39.302 "trtype": "tcp", 00:32:39.302 "traddr": "10.0.0.2", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "4420", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.302 "hdgst": false, 00:32:39.302 "ddgst": false 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 }' 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme1", 00:32:39.302 "trtype": "tcp", 00:32:39.302 "traddr": "10.0.0.2", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "4420", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.302 "hdgst": false, 00:32:39.302 "ddgst": false 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 }' 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme1", 00:32:39.302 "trtype": "tcp", 00:32:39.302 "traddr": "10.0.0.2", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "4420", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.302 "hdgst": false, 00:32:39.302 "ddgst": false 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 00:32:39.302 }' 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:39.302 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:39.302 "params": { 00:32:39.302 "name": "Nvme1", 00:32:39.302 "trtype": "tcp", 00:32:39.302 "traddr": "10.0.0.2", 00:32:39.302 "adrfam": "ipv4", 00:32:39.302 "trsvcid": "4420", 00:32:39.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.302 "hdgst": false, 00:32:39.302 "ddgst": false 00:32:39.302 }, 00:32:39.302 "method": "bdev_nvme_attach_controller" 
00:32:39.302 }' 00:32:39.302 [2024-11-06 13:57:02.566319] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:32:39.302 [2024-11-06 13:57:02.566373] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:39.302 [2024-11-06 13:57:02.568383] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:32:39.302 [2024-11-06 13:57:02.568429] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:39.302 [2024-11-06 13:57:02.571016] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:32:39.302 [2024-11-06 13:57:02.571065] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:39.302 [2024-11-06 13:57:02.572206] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:32:39.302 [2024-11-06 13:57:02.572253] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:39.562 [2024-11-06 13:57:02.721240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.562 [2024-11-06 13:57:02.750414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:39.562 [2024-11-06 13:57:02.775890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.562 [2024-11-06 13:57:02.805405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:39.562 [2024-11-06 13:57:02.824876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.562 [2024-11-06 13:57:02.856120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:39.562 [2024-11-06 13:57:02.870795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.562 [2024-11-06 13:57:02.899171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:39.821 Running I/O for 1 seconds... 00:32:39.821 Running I/O for 1 seconds... 00:32:39.821 Running I/O for 1 seconds... 00:32:39.821 Running I/O for 1 seconds... 
00:32:40.760 12757.00 IOPS, 49.83 MiB/s 00:32:40.760 Latency(us) 00:32:40.760 [2024-11-06T12:57:04.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.760 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:40.760 Nvme1n1 : 1.01 12805.04 50.02 0.00 0.00 9962.88 5024.43 12288.00 00:32:40.760 [2024-11-06T12:57:04.136Z] =================================================================================================================== 00:32:40.760 [2024-11-06T12:57:04.136Z] Total : 12805.04 50.02 0.00 0.00 9962.88 5024.43 12288.00 00:32:40.760 11888.00 IOPS, 46.44 MiB/s 00:32:40.760 Latency(us) 00:32:40.760 [2024-11-06T12:57:04.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.760 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:40.760 Nvme1n1 : 1.01 11960.17 46.72 0.00 0.00 10665.70 2157.23 14636.37 00:32:40.760 [2024-11-06T12:57:04.136Z] =================================================================================================================== 00:32:40.760 [2024-11-06T12:57:04.136Z] Total : 11960.17 46.72 0.00 0.00 10665.70 2157.23 14636.37 00:32:40.760 188520.00 IOPS, 736.41 MiB/s 00:32:40.760 Latency(us) 00:32:40.760 [2024-11-06T12:57:04.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.760 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:40.760 Nvme1n1 : 1.00 188147.94 734.95 0.00 0.00 676.35 303.79 1979.73 00:32:40.760 [2024-11-06T12:57:04.136Z] =================================================================================================================== 00:32:40.760 [2024-11-06T12:57:04.136Z] Total : 188147.94 734.95 0.00 0.00 676.35 303.79 1979.73 00:32:40.760 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 883564 00:32:40.760 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 883566 00:32:40.760 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 883570 00:32:41.021 13555.00 IOPS, 52.95 MiB/s 00:32:41.021 Latency(us) 00:32:41.021 [2024-11-06T12:57:04.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.021 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:41.021 Nvme1n1 : 1.01 13641.00 53.29 0.00 0.00 9359.39 2689.71 16056.32 00:32:41.021 [2024-11-06T12:57:04.397Z] =================================================================================================================== 00:32:41.021 [2024-11-06T12:57:04.397Z] Total : 13641.00 53.29 0.00 0.00 9359.39 2689.71 16056.32 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:41.021 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.022 13:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.022 rmmod nvme_tcp 00:32:41.022 rmmod nvme_fabrics 00:32:41.022 rmmod nvme_keyring 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 883317 ']' 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 883317 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 883317 ']' 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 883317 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:41.022 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 883317 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 883317' 00:32:41.283 killing process with pid 883317 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 883317 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 883317 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.283 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.283 13:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.826 00:32:43.826 real 0m12.489s 00:32:43.826 user 0m15.063s 00:32:43.826 sys 0m7.014s 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.826 ************************************ 00:32:43.826 END TEST nvmf_bdev_io_wait 00:32:43.826 ************************************ 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:43.826 ************************************ 00:32:43.826 START TEST nvmf_queue_depth 00:32:43.826 ************************************ 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:43.826 * Looking for test storage... 
00:32:43.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:43.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.826 --rc genhtml_branch_coverage=1 00:32:43.826 --rc genhtml_function_coverage=1 00:32:43.826 --rc genhtml_legend=1 00:32:43.826 --rc geninfo_all_blocks=1 00:32:43.826 --rc geninfo_unexecuted_blocks=1 00:32:43.826 00:32:43.826 ' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:43.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.826 --rc genhtml_branch_coverage=1 00:32:43.826 --rc genhtml_function_coverage=1 00:32:43.826 --rc genhtml_legend=1 00:32:43.826 --rc geninfo_all_blocks=1 00:32:43.826 --rc geninfo_unexecuted_blocks=1 00:32:43.826 00:32:43.826 ' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:43.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.826 --rc genhtml_branch_coverage=1 00:32:43.826 --rc genhtml_function_coverage=1 00:32:43.826 --rc genhtml_legend=1 00:32:43.826 --rc geninfo_all_blocks=1 00:32:43.826 --rc geninfo_unexecuted_blocks=1 00:32:43.826 00:32:43.826 ' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:43.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.826 --rc genhtml_branch_coverage=1 00:32:43.826 --rc genhtml_function_coverage=1 00:32:43.826 --rc genhtml_legend=1 00:32:43.826 --rc 
geninfo_all_blocks=1 00:32:43.826 --rc geninfo_unexecuted_blocks=1 00:32:43.826 00:32:43.826 ' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.826 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.827 13:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.827 13:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:43.827 13:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.827 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.412 
13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:50.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.412 13:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:50.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:50.412 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:50.412 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.412 13:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.412 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.413 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:50.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:32:50.673 00:32:50.673 --- 10.0.0.2 ping statistics --- 00:32:50.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.673 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:32:50.673 00:32:50.673 --- 10.0.0.1 ping statistics --- 00:32:50.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.673 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.673 13:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=888517 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 888517 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 888517 ']' 00:32:50.673 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.674 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:50.674 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:50.674 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:50.674 13:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:50.674 [2024-11-06 13:57:13.923272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:50.674 [2024-11-06 13:57:13.924236] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:32:50.674 [2024-11-06 13:57:13.924273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.674 [2024-11-06 13:57:14.021703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.934 [2024-11-06 13:57:14.056786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.934 [2024-11-06 13:57:14.056819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.934 [2024-11-06 13:57:14.056827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.934 [2024-11-06 13:57:14.056834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.934 [2024-11-06 13:57:14.056840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.934 [2024-11-06 13:57:14.057375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.934 [2024-11-06 13:57:14.111824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:50.934 [2024-11-06 13:57:14.112074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.504 [2024-11-06 13:57:14.750136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.504 Malloc0 00:32:51.504 13:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.504 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.505 [2024-11-06 13:57:14.834108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.505 
13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=888615 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 888615 /var/tmp/bdevperf.sock 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 888615 ']' 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:51.505 13:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:51.765 [2024-11-06 13:57:14.888547] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:32:51.765 [2024-11-06 13:57:14.888604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888615 ] 00:32:51.765 [2024-11-06 13:57:14.962140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.765 [2024-11-06 13:57:15.002952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.336 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:52.336 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:52.336 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:52.336 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.336 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:52.597 NVMe0n1 00:32:52.597 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.597 13:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:52.597 Running I/O for 10 seconds... 
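The bdevperf run above was launched with `-q 1024 -o 4096` (queue depth 1024, 4 KiB I/O). In the results that follow, the reported MiB/s is simply IOPS scaled by the I/O size; a minimal sketch of that conversion (the function name and values here are illustrative, not part of the test suite):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size.

    MiB/s = iops * io_size_bytes / 2**20 (binary mebibytes,
    matching bdevperf's reported units).
    """
    return iops * io_size_bytes / (1 << 20)


# Values taken from the JSON results block in this log:
# 11197.37 IOPS at 4096-byte I/Os reports as ~43.74 MiB/s.
print(round(iops_to_mibps(11197.37, 4096), 2))
```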
00:32:54.479 8636.00 IOPS, 33.73 MiB/s [2024-11-06T12:57:19.239Z] 8996.00 IOPS, 35.14 MiB/s [2024-11-06T12:57:20.179Z] 9137.33 IOPS, 35.69 MiB/s [2024-11-06T12:57:21.120Z] 9453.25 IOPS, 36.93 MiB/s [2024-11-06T12:57:22.062Z] 9971.00 IOPS, 38.95 MiB/s [2024-11-06T12:57:23.003Z] 10360.17 IOPS, 40.47 MiB/s [2024-11-06T12:57:23.942Z] 10634.43 IOPS, 41.54 MiB/s [2024-11-06T12:57:24.881Z] 10836.25 IOPS, 42.33 MiB/s [2024-11-06T12:57:26.263Z] 11019.56 IOPS, 43.05 MiB/s [2024-11-06T12:57:26.263Z] 11167.10 IOPS, 43.62 MiB/s 00:33:02.887 Latency(us) 00:33:02.887 [2024-11-06T12:57:26.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.887 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:02.887 Verification LBA range: start 0x0 length 0x4000 00:33:02.887 NVMe0n1 : 10.06 11197.37 43.74 0.00 0.00 91126.25 24248.32 72089.60 00:33:02.887 [2024-11-06T12:57:26.263Z] =================================================================================================================== 00:33:02.887 [2024-11-06T12:57:26.263Z] Total : 11197.37 43.74 0.00 0.00 91126.25 24248.32 72089.60 00:33:02.887 { 00:33:02.887 "results": [ 00:33:02.887 { 00:33:02.887 "job": "NVMe0n1", 00:33:02.887 "core_mask": "0x1", 00:33:02.887 "workload": "verify", 00:33:02.887 "status": "finished", 00:33:02.887 "verify_range": { 00:33:02.887 "start": 0, 00:33:02.887 "length": 16384 00:33:02.887 }, 00:33:02.887 "queue_depth": 1024, 00:33:02.887 "io_size": 4096, 00:33:02.887 "runtime": 10.060044, 00:33:02.887 "iops": 11197.3665323929, 00:33:02.887 "mibps": 43.739713017159765, 00:33:02.887 "io_failed": 0, 00:33:02.887 "io_timeout": 0, 00:33:02.887 "avg_latency_us": 91126.25191319118, 00:33:02.887 "min_latency_us": 24248.32, 00:33:02.887 "max_latency_us": 72089.6 00:33:02.887 } 00:33:02.887 ], 00:33:02.887 "core_count": 1 00:33:02.887 } 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 888615 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 888615 ']' 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 888615 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:02.887 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 888615 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 888615' 00:33:02.887 killing process with pid 888615 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 888615 00:33:02.887 Received shutdown signal, test time was about 10.000000 seconds 00:33:02.887 00:33:02.887 Latency(us) 00:33:02.887 [2024-11-06T12:57:26.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.887 [2024-11-06T12:57:26.263Z] =================================================================================================================== 00:33:02.887 [2024-11-06T12:57:26.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 888615 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.887 rmmod nvme_tcp 00:33:02.887 rmmod nvme_fabrics 00:33:02.887 rmmod nvme_keyring 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 888517 ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 888517 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 888517 ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 888517 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 
-- # uname 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 888517 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 888517' 00:33:02.887 killing process with pid 888517 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 888517 00:33:02.887 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 888517 00:33:03.147 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.147 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.147 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.148 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.056 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.056 00:33:05.056 real 0m21.720s 00:33:05.056 user 0m24.293s 00:33:05.056 sys 0m6.835s 00:33:05.056 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:05.056 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:05.056 ************************************ 00:33:05.056 END TEST nvmf_queue_depth 00:33:05.056 ************************************ 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.317 ************************************ 00:33:05.317 START TEST nvmf_target_multipath 00:33:05.317 ************************************ 
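The multipath test that follows starts by probing the installed lcov version through the `cmp_versions` helper in `scripts/common.sh`, which splits each version string on `.`, `-`, or `:` and compares the numeric fields left to right (missing fields compare as 0). A rough Python equivalent of that comparison, for illustration only (this is not SPDK code, and it skips `decimal`'s handling of non-numeric fields):

```python
import re


def version_lt(ver1: str, ver2: str) -> bool:
    """Return True if ver1 < ver2, mimicking cmp_versions' field-wise
    numeric comparison on dot/dash/colon-separated version strings."""
    a = [int(x) for x in re.split(r"[.\-:]", ver1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.\-:]", ver2) if x.isdigit()]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0  # absent fields count as 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x < y
    return False  # equal versions: not strictly less-than
```

In the log above this corresponds to `lt 1.15 2` succeeding (lcov 1.15 is older than 2), which selects the legacy `--rc lcov_branch_coverage=1` option spelling.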
00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:05.317 * Looking for test storage... 00:33:05.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:05.317 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.318 13:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.318 13:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:05.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.318 --rc genhtml_branch_coverage=1 00:33:05.318 --rc genhtml_function_coverage=1 00:33:05.318 --rc genhtml_legend=1 00:33:05.318 --rc geninfo_all_blocks=1 00:33:05.318 --rc geninfo_unexecuted_blocks=1 00:33:05.318 00:33:05.318 ' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:05.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.318 --rc genhtml_branch_coverage=1 00:33:05.318 --rc genhtml_function_coverage=1 00:33:05.318 --rc genhtml_legend=1 00:33:05.318 --rc geninfo_all_blocks=1 00:33:05.318 --rc geninfo_unexecuted_blocks=1 00:33:05.318 00:33:05.318 ' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:05.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.318 --rc genhtml_branch_coverage=1 00:33:05.318 --rc 
genhtml_function_coverage=1 00:33:05.318 --rc genhtml_legend=1 00:33:05.318 --rc geninfo_all_blocks=1 00:33:05.318 --rc geninfo_unexecuted_blocks=1 00:33:05.318 00:33:05.318 ' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:05.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.318 --rc genhtml_branch_coverage=1 00:33:05.318 --rc genhtml_function_coverage=1 00:33:05.318 --rc genhtml_legend=1 00:33:05.318 --rc geninfo_all_blocks=1 00:33:05.318 --rc geninfo_unexecuted_blocks=1 00:33:05.318 00:33:05.318 ' 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.318 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.579 13:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:05.579 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.580 13:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.580 13:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local 
-a pci_net_devs 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.716 13:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:13.716 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:13.716 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:13.716 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.716 13:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:13.716 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.716 13:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.716 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.717 
13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.717 13:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:33:13.717 00:33:13.717 --- 10.0.0.2 ping statistics --- 00:33:13.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.717 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:13.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:33:13.717 00:33:13.717 --- 10.0.0.1 ping statistics --- 00:33:13.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.717 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:13.717 only one NIC for nvmf test 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:13.717 13:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.717 rmmod nvme_tcp 00:33:13.717 rmmod nvme_fabrics 00:33:13.717 rmmod nvme_keyring 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:13.717 13:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.717 13:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.100 
13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.100 00:33:15.100 real 0m9.799s 00:33:15.100 user 0m2.055s 00:33:15.100 sys 0m5.685s 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:15.100 ************************************ 00:33:15.100 END TEST nvmf_target_multipath 00:33:15.100 ************************************ 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.100 ************************************ 00:33:15.100 START TEST nvmf_zcopy 00:33:15.100 ************************************ 00:33:15.100 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:15.100 * Looking for test storage... 
00:33:15.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:15.361 13:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.361 --rc genhtml_branch_coverage=1 00:33:15.361 --rc genhtml_function_coverage=1 00:33:15.361 --rc genhtml_legend=1 00:33:15.361 --rc geninfo_all_blocks=1 00:33:15.361 --rc geninfo_unexecuted_blocks=1 00:33:15.361 00:33:15.361 ' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.361 --rc genhtml_branch_coverage=1 00:33:15.361 --rc genhtml_function_coverage=1 00:33:15.361 --rc genhtml_legend=1 00:33:15.361 --rc geninfo_all_blocks=1 00:33:15.361 --rc geninfo_unexecuted_blocks=1 00:33:15.361 00:33:15.361 ' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.361 --rc genhtml_branch_coverage=1 00:33:15.361 --rc genhtml_function_coverage=1 00:33:15.361 --rc genhtml_legend=1 00:33:15.361 --rc geninfo_all_blocks=1 00:33:15.361 --rc geninfo_unexecuted_blocks=1 00:33:15.361 00:33:15.361 ' 00:33:15.361 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.361 --rc genhtml_branch_coverage=1 00:33:15.361 --rc genhtml_function_coverage=1 00:33:15.361 --rc genhtml_legend=1 00:33:15.361 --rc geninfo_all_blocks=1 00:33:15.362 --rc geninfo_unexecuted_blocks=1 00:33:15.362 00:33:15.362 ' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.362 13:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.362 13:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.362 13:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.500 
13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.500 13:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:23.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:23.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:23.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:23.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.500 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.501 13:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:33:23.501 00:33:23.501 --- 10.0.0.2 ping statistics --- 00:33:23.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.501 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:33:23.501 00:33:23.501 --- 10.0.0.1 ping statistics --- 00:33:23.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.501 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=899049 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 899049 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 899049 ']' 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:23.501 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.501 [2024-11-06 13:57:46.027175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:23.501 [2024-11-06 13:57:46.028576] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:33:23.501 [2024-11-06 13:57:46.028646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.501 [2024-11-06 13:57:46.128301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.501 [2024-11-06 13:57:46.179110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.501 [2024-11-06 13:57:46.179164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.501 [2024-11-06 13:57:46.179172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.501 [2024-11-06 13:57:46.179180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.501 [2024-11-06 13:57:46.179186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.501 [2024-11-06 13:57:46.179988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.501 [2024-11-06 13:57:46.255598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.501 [2024-11-06 13:57:46.255897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.501 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.501 [2024-11-06 13:57:46.868864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.762 
13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.762 [2024-11-06 13:57:46.897140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.762 malloc0 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.762 { 00:33:23.762 "params": { 00:33:23.762 "name": "Nvme$subsystem", 00:33:23.762 "trtype": "$TEST_TRANSPORT", 00:33:23.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.762 "adrfam": "ipv4", 00:33:23.762 "trsvcid": "$NVMF_PORT", 00:33:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.762 "hdgst": ${hdgst:-false}, 00:33:23.762 "ddgst": ${ddgst:-false} 00:33:23.762 }, 00:33:23.762 "method": "bdev_nvme_attach_controller" 00:33:23.762 } 00:33:23.762 EOF 00:33:23.762 )") 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:23.762 13:57:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:23.762 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.762 "params": { 00:33:23.762 "name": "Nvme1", 00:33:23.762 "trtype": "tcp", 00:33:23.762 "traddr": "10.0.0.2", 00:33:23.762 "adrfam": "ipv4", 00:33:23.762 "trsvcid": "4420", 00:33:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.762 "hdgst": false, 00:33:23.762 "ddgst": false 00:33:23.762 }, 00:33:23.762 "method": "bdev_nvme_attach_controller" 00:33:23.762 }' 00:33:23.762 [2024-11-06 13:57:46.999811] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:33:23.762 [2024-11-06 13:57:46.999878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899226 ] 00:33:23.762 [2024-11-06 13:57:47.074872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.762 [2024-11-06 13:57:47.116829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.023 Running I/O for 10 seconds... 
00:33:25.901 6583.00 IOPS, 51.43 MiB/s [2024-11-06T12:57:50.657Z] 6616.50 IOPS, 51.69 MiB/s [2024-11-06T12:57:51.596Z] 6640.67 IOPS, 51.88 MiB/s [2024-11-06T12:57:52.535Z] 6646.00 IOPS, 51.92 MiB/s [2024-11-06T12:57:53.475Z] 6655.60 IOPS, 52.00 MiB/s [2024-11-06T12:57:54.414Z] 7096.00 IOPS, 55.44 MiB/s [2024-11-06T12:57:55.362Z] 7459.71 IOPS, 58.28 MiB/s [2024-11-06T12:57:56.360Z] 7733.12 IOPS, 60.42 MiB/s [2024-11-06T12:57:57.307Z] 7946.33 IOPS, 62.08 MiB/s [2024-11-06T12:57:57.307Z] 8115.60 IOPS, 63.40 MiB/s 00:33:33.931 Latency(us) 00:33:33.931 [2024-11-06T12:57:57.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.931 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:33.931 Verification LBA range: start 0x0 length 0x1000 00:33:33.931 Nvme1n1 : 10.01 8119.30 63.43 0.00 0.00 15713.50 1706.67 26869.76 00:33:33.931 [2024-11-06T12:57:57.307Z] =================================================================================================================== 00:33:33.931 [2024-11-06T12:57:57.307Z] Total : 8119.30 63.43 0.00 0.00 15713.50 1706.67 26869.76 00:33:34.190 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=901234 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:34.191 13:57:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.191 { 00:33:34.191 "params": { 00:33:34.191 "name": "Nvme$subsystem", 00:33:34.191 "trtype": "$TEST_TRANSPORT", 00:33:34.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.191 "adrfam": "ipv4", 00:33:34.191 "trsvcid": "$NVMF_PORT", 00:33:34.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.191 "hdgst": ${hdgst:-false}, 00:33:34.191 "ddgst": ${ddgst:-false} 00:33:34.191 }, 00:33:34.191 "method": "bdev_nvme_attach_controller" 00:33:34.191 } 00:33:34.191 EOF 00:33:34.191 )") 00:33:34.191 [2024-11-06 13:57:57.412408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.412438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:34.191 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:34.191 "params": { 00:33:34.191 "name": "Nvme1", 00:33:34.191 "trtype": "tcp", 00:33:34.191 "traddr": "10.0.0.2", 00:33:34.191 "adrfam": "ipv4", 00:33:34.191 "trsvcid": "4420", 00:33:34.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:34.191 "hdgst": false, 00:33:34.191 "ddgst": false 00:33:34.191 }, 00:33:34.191 "method": "bdev_nvme_attach_controller" 00:33:34.191 }' 00:33:34.191 [2024-11-06 13:57:57.424375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.424384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.436374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.436382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.448374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.448382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.459137] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:33:34.191 [2024-11-06 13:57:57.459194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901234 ] 00:33:34.191 [2024-11-06 13:57:57.460374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.460383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.472374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.472383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.484374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.484382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.496374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.496382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.508374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.508382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.520374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.520381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.529472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.191 [2024-11-06 13:57:57.532374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:34.191 [2024-11-06 13:57:57.532381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.544374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.544384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.556374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.191 [2024-11-06 13:57:57.556384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.191 [2024-11-06 13:57:57.564491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.452 [2024-11-06 13:57:57.568374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.568383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.580380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.580393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.592378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.592390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.604375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.604385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.616375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.616389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.628374] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.628381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.640382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.640401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.652432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.652443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.664378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.664389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.676376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.676386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.688377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.688386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.700373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.700381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.712373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.712381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.724374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.724384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.736373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.736381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.748374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.748381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.760373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.760382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.772373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.772381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.784373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.784381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.796373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.796380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.808374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 [2024-11-06 13:57:57.808382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.452 [2024-11-06 13:57:57.820381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.452 
[2024-11-06 13:57:57.820396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 Running I/O for 5 seconds... 00:33:34.712 [2024-11-06 13:57:57.836430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.836446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.849182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.849199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.863471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.863487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.876337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.876353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.889320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.889335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.903757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.903773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.916972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.916988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.931202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 
13:57:57.931218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.944739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.944758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.959083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.959098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.972152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.972167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.985731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.985751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:57.999787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:57.999803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.712 [2024-11-06 13:57:58.012882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.712 [2024-11-06 13:57:58.012897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.713 [2024-11-06 13:57:58.028183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.713 [2024-11-06 13:57:58.028198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.713 [2024-11-06 13:57:58.041356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.713 [2024-11-06 13:57:58.041372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:33:34.713 [2024-11-06 13:57:58.055455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.713 [2024-11-06 13:57:58.055470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.713 [2024-11-06 13:57:58.068446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.713 [2024-11-06 13:57:58.068463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.713 [2024-11-06 13:57:58.081158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.713 [2024-11-06 13:57:58.081174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.095623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.095639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.108944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.108960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.123405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.123421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.136480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.136496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.149518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.149534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 
[2024-11-06 13:57:58.164087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.164104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.177276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.177294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.191557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.191573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.204446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.204462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.217579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.217595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.231890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.231906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.244435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.244450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.257315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.257331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.271831] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.271846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.284475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.284491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.297264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.973 [2024-11-06 13:57:58.297279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.973 [2024-11-06 13:57:58.311311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.974 [2024-11-06 13:57:58.311326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.974 [2024-11-06 13:57:58.324615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.974 [2024-11-06 13:57:58.324631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.974 [2024-11-06 13:57:58.337724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.974 [2024-11-06 13:57:58.337739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.351477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.351493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.364704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.364719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.379494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.379510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.392356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.392372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.405182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.405198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.419969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.419986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.433135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.233 [2024-11-06 13:57:58.433150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.233 [2024-11-06 13:57:58.447710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.447726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.460958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.460974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.475825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.475842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.488847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 
[2024-11-06 13:57:58.488863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.503434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.503449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.516360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.516376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.529532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.529548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.543331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.543347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.556714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.556728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.571388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.571404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.584270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.584286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.234 [2024-11-06 13:57:58.597395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.234 [2024-11-06 13:57:58.597410] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.611730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.611759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.625118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.625134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.639620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.639636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.652843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.652858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.667345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.667360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.680407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.680422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.693453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.693468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.707591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.707606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:35.494 [2024-11-06 13:57:58.720136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.720151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.733412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.733427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.747961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.747977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.761087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.761102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.775198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.775214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.788321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.788338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.801594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.801609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.815245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.815261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.828347] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.828363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 18921.00 IOPS, 147.82 MiB/s [2024-11-06T12:57:58.870Z] [2024-11-06 13:57:58.840400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.840416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.853332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.853348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.494 [2024-11-06 13:57:58.867582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.494 [2024-11-06 13:57:58.867601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.754 [2024-11-06 13:57:58.880585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.754 [2024-11-06 13:57:58.880600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.754 [2024-11-06 13:57:58.893157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.754 [2024-11-06 13:57:58.893172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.754 [2024-11-06 13:57:58.907527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.754 [2024-11-06 13:57:58.907542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.754 [2024-11-06 13:57:58.920611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.754 [2024-11-06 13:57:58.920626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.754 [2024-11-06 13:57:58.933908] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.754 [2024-11-06 13:57:58.933923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats every ~13-15 ms from [2024-11-06 13:57:58.947471] through [2024-11-06 13:58:01.276972], interleaved with two throughput samples: 18988.50 IOPS, 148.35 MiB/s [2024-11-06T12:57:59.910Z] and 18970.67 IOPS, 148.21 MiB/s [2024-11-06T12:58:00.950Z] ...]
00:33:38.094 [2024-11-06 13:58:01.276972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.276987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:38.094 [2024-11-06 13:58:01.291334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.291350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.304612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.304627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.317366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.317381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.331444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.331459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.344701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.344716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.359791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.359806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.372920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.372935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.387318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.387333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.400415] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.400431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.413029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.413044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.427123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.427138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.440143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.440158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.452952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.452967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.094 [2024-11-06 13:58:01.467148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.094 [2024-11-06 13:58:01.467163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.480363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.480378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.493531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.493546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.507310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.507326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.520595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.520610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.533619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.533634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.547782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.547797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.560911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.560926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.575946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.575962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.589032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.589047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.603400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.603416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.616505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 
[2024-11-06 13:58:01.616520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.629429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.629444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.643724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.643739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.656867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.656882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.671162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.671179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.684078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.684094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.696899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.696914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.711677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.711693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.354 [2024-11-06 13:58:01.724855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.354 [2024-11-06 13:58:01.724871] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.739552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.739569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.752677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.752692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.767656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.767672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.780670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.780686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.795567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.795584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.808632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.808647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.821966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.821983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.613 [2024-11-06 13:58:01.835523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.835539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:38.613 18978.50 IOPS, 148.27 MiB/s [2024-11-06T12:58:01.989Z] [2024-11-06 13:58:01.849127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.613 [2024-11-06 13:58:01.849142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.863778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.863794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.876848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.876863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.891627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.891643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.904809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.904824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.919417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.919433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.932448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.932464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.945232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.945248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:38.614 [2024-11-06 13:58:01.959570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.959591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.972903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.972918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.614 [2024-11-06 13:58:01.988184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.614 [2024-11-06 13:58:01.988200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.001644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.001660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.015849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.015865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.028985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.029000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.043579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.043596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.056593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.056608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.069535] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.069551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.083783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.083799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.097197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.097212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.111472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.111488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.124602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.873 [2024-11-06 13:58:02.124618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.873 [2024-11-06 13:58:02.137400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.137415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.151493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.151509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.164329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.164345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.177090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.177105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.191290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.191306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.204751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.204766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.219193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.219212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.232320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.232336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:38.874 [2024-11-06 13:58:02.245714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:38.874 [2024-11-06 13:58:02.245731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.259564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.259581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.272627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.272643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.285306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 
[2024-11-06 13:58:02.285322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.299468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.299484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.312522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.312539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.325778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.325794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.339790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.339805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.352922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.352937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.367433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.367449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.380116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.380132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.393438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.393454] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.407495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.407511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.420579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.420595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.433531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.433547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.447690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.447706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.461363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.461378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.475523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.475545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.488769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.488784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.134 [2024-11-06 13:58:02.503645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.134 [2024-11-06 13:58:02.503660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:39.394 [2024-11-06 13:58:02.516950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.394 [2024-11-06 13:58:02.516966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.394 [2024-11-06 13:58:02.531539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.394 [2024-11-06 13:58:02.531555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.394 [2024-11-06 13:58:02.544750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.394 [2024-11-06 13:58:02.544764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.394 [2024-11-06 13:58:02.559563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.394 [2024-11-06 13:58:02.559579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.394 [2024-11-06 13:58:02.572489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.572505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.585641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.585656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.599814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.599830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.612804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.612819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.627122] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.627137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.640200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.640216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.653487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.653503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.668097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.668112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.680695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.680710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.695092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.695107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.708292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.708308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.721491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.721507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.735838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.735853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.748909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.748924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.395 [2024-11-06 13:58:02.763351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.395 [2024-11-06 13:58:02.763367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 [2024-11-06 13:58:02.776396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.655 [2024-11-06 13:58:02.776412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 [2024-11-06 13:58:02.789532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.655 [2024-11-06 13:58:02.789546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 [2024-11-06 13:58:02.803400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.655 [2024-11-06 13:58:02.803415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 [2024-11-06 13:58:02.816285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.655 [2024-11-06 13:58:02.816301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 [2024-11-06 13:58:02.829295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.655 [2024-11-06 13:58:02.829310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.655 18992.40 IOPS, 148.38 MiB/s [2024-11-06T12:58:03.031Z] [2024-11-06 13:58:02.842396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.842411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655
00:33:39.655 Latency(us)
00:33:39.655 [2024-11-06T12:58:03.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.655 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:39.655 Nvme1n1 : 5.01 18996.47 148.41 0.00 0.00 6731.15 2443.95 13271.04
00:33:39.655 [2024-11-06T12:58:03.031Z] ===================================================================================================================
00:33:39.655 [2024-11-06T12:58:03.031Z] Total : 18996.47 148.41 0.00 0.00 6731.15 2443.95 13271.04
00:33:39.655 [2024-11-06 13:58:02.852378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.852392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.864380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.864392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.876382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.876395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.888381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.888392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.900377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.900386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.912376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.912385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.924373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.924381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.936377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.936388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.948374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.948382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 [2024-11-06 13:58:02.960376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.655 [2024-11-06 13:58:02.960384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (901234) - No such process
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 901234
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:39.655 delay0
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.655 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:39.915 [2024-11-06 13:58:03.148932] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:46.486 Initializing NVMe Controllers
00:33:46.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:46.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:46.486 Initialization complete. Launching workers.
00:33:46.486 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1674 00:33:46.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1945, failed to submit 49 00:33:46.486 success 1744, unsuccessful 201, failed 0 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.486 rmmod nvme_tcp 00:33:46.486 rmmod nvme_fabrics 00:33:46.486 rmmod nvme_keyring 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 899049 ']' 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 899049 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 
-- # '[' -z 899049 ']' 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 899049 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:46.486 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 899049 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 899049' 00:33:46.487 killing process with pid 899049 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 899049 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 899049 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.487 13:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.487 13:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.029 00:33:49.029 real 0m33.429s 00:33:49.029 user 0m42.930s 00:33:49.029 sys 0m11.705s 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:49.029 ************************************ 00:33:49.029 END TEST nvmf_zcopy 00:33:49.029 ************************************ 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:49.029 
************************************ 00:33:49.029 START TEST nvmf_nmic 00:33:49.029 ************************************ 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:49.029 * Looking for test storage... 00:33:49.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:49.029 13:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.030 13:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.030 13:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:49.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.030 --rc genhtml_branch_coverage=1 00:33:49.030 --rc genhtml_function_coverage=1 00:33:49.030 --rc genhtml_legend=1 00:33:49.030 --rc geninfo_all_blocks=1 00:33:49.030 --rc geninfo_unexecuted_blocks=1 00:33:49.030 00:33:49.030 ' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:49.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.030 --rc genhtml_branch_coverage=1 00:33:49.030 --rc genhtml_function_coverage=1 00:33:49.030 --rc genhtml_legend=1 00:33:49.030 --rc geninfo_all_blocks=1 00:33:49.030 --rc geninfo_unexecuted_blocks=1 00:33:49.030 00:33:49.030 ' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:49.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.030 --rc genhtml_branch_coverage=1 00:33:49.030 --rc genhtml_function_coverage=1 00:33:49.030 --rc genhtml_legend=1 00:33:49.030 --rc geninfo_all_blocks=1 00:33:49.030 --rc geninfo_unexecuted_blocks=1 00:33:49.030 00:33:49.030 ' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:49.030 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.030 --rc genhtml_branch_coverage=1 00:33:49.030 --rc genhtml_function_coverage=1 00:33:49.030 --rc genhtml_legend=1 00:33:49.030 --rc geninfo_all_blocks=1 00:33:49.030 --rc geninfo_unexecuted_blocks=1 00:33:49.030 00:33:49.030 ' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:49.030 13:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.030 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.030 13:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.031 13:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.167 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.167 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:57.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:57.167 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.167 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:57.167 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.167 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:57.167 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.167 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.168 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:33:57.168 00:33:57.168 --- 10.0.0.2 ping statistics --- 00:33:57.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.168 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:33:57.168 00:33:57.168 --- 10.0.0.1 ping statistics --- 00:33:57.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.168 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=907569 
00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 907569 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 907569 ']' 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 [2024-11-06 13:58:19.440888] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:57.168 [2024-11-06 13:58:19.441843] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:33:57.168 [2024-11-06 13:58:19.441881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.168 [2024-11-06 13:58:19.521366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:57.168 [2024-11-06 13:58:19.558211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.168 [2024-11-06 13:58:19.558246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.168 [2024-11-06 13:58:19.558253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.168 [2024-11-06 13:58:19.558260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.168 [2024-11-06 13:58:19.558266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.168 [2024-11-06 13:58:19.562764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.168 [2024-11-06 13:58:19.562799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:57.168 [2024-11-06 13:58:19.562966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:57.168 [2024-11-06 13:58:19.563051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.168 [2024-11-06 13:58:19.617765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:57.168 [2024-11-06 13:58:19.617828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:57.168 [2024-11-06 13:58:19.618130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:57.168 [2024-11-06 13:58:19.618830] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:57.168 [2024-11-06 13:58:19.618859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 [2024-11-06 13:58:19.691525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 Malloc0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.168 [2024-11-06 13:58:19.767725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.168 13:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:57.168 test case1: single bdev can't be used in multiple subsystems 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.168 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.169 [2024-11-06 13:58:19.803458] 
bdev.c:8194:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:57.169 [2024-11-06 13:58:19.803478] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:57.169 [2024-11-06 13:58:19.803486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.169 request: 00:33:57.169 { 00:33:57.169 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:57.169 "namespace": { 00:33:57.169 "bdev_name": "Malloc0", 00:33:57.169 "no_auto_visible": false 00:33:57.169 }, 00:33:57.169 "method": "nvmf_subsystem_add_ns", 00:33:57.169 "req_id": 1 00:33:57.169 } 00:33:57.169 Got JSON-RPC error response 00:33:57.169 response: 00:33:57.169 { 00:33:57.169 "code": -32602, 00:33:57.169 "message": "Invalid parameters" 00:33:57.169 } 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:57.169 Adding namespace failed - expected result. 
00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:57.169 test case2: host connect to nvmf target in multiple paths 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:57.169 [2024-11-06 13:58:19.815570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.169 13:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:57.169 13:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:33:59.707 13:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:59.707 [global] 00:33:59.707 thread=1 00:33:59.707 invalidate=1 00:33:59.707 rw=write 00:33:59.707 time_based=1 00:33:59.707 runtime=1 00:33:59.707 ioengine=libaio 00:33:59.707 direct=1 00:33:59.707 bs=4096 00:33:59.707 iodepth=1 00:33:59.707 norandommap=0 00:33:59.707 numjobs=1 00:33:59.707 00:33:59.707 verify_dump=1 00:33:59.707 verify_backlog=512 00:33:59.707 verify_state_save=0 00:33:59.707 do_verify=1 00:33:59.707 verify=crc32c-intel 00:33:59.707 [job0] 00:33:59.707 filename=/dev/nvme0n1 00:33:59.707 Could not set queue depth (nvme0n1) 00:33:59.707 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.707 fio-3.35 00:33:59.707 Starting 1 thread 00:34:01.089 00:34:01.089 job0: (groupid=0, jobs=1): err= 0: pid=908437: Wed Nov 6 13:58:24 
2024 00:34:01.089 read: IOPS=629, BW=2517KiB/s (2578kB/s)(2520KiB/1001msec) 00:34:01.089 slat (nsec): min=6966, max=62174, avg=23664.77, stdev=7850.71 00:34:01.089 clat (usec): min=225, max=890, avg=692.48, stdev=94.65 00:34:01.089 lat (usec): min=233, max=902, avg=716.14, stdev=97.18 00:34:01.089 clat percentiles (usec): 00:34:01.089 | 1.00th=[ 424], 5.00th=[ 523], 10.00th=[ 553], 20.00th=[ 619], 00:34:01.089 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 717], 60.00th=[ 742], 00:34:01.089 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 807], 00:34:01.089 | 99.00th=[ 832], 99.50th=[ 840], 99.90th=[ 889], 99.95th=[ 889], 00:34:01.089 | 99.99th=[ 889] 00:34:01.089 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:01.089 slat (usec): min=9, max=30680, avg=61.90, stdev=957.81 00:34:01.089 clat (usec): min=186, max=701, avg=462.03, stdev=91.58 00:34:01.089 lat (usec): min=198, max=31359, avg=523.93, stdev=969.22 00:34:01.089 clat percentiles (usec): 00:34:01.089 | 1.00th=[ 247], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 371], 00:34:01.089 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 478], 00:34:01.089 | 70.00th=[ 498], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 611], 00:34:01.089 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 701], 00:34:01.089 | 99.99th=[ 701] 00:34:01.089 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:01.089 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:01.089 lat (usec) : 250=0.91%, 500=43.59%, 750=41.41%, 1000=14.09% 00:34:01.089 cpu : usr=1.90%, sys=5.40%, ctx=1658, majf=0, minf=1 00:34:01.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.089 issued rwts: total=630,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:01.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:01.089 00:34:01.089 Run status group 0 (all jobs): 00:34:01.089 READ: bw=2517KiB/s (2578kB/s), 2517KiB/s-2517KiB/s (2578kB/s-2578kB/s), io=2520KiB (2580kB), run=1001-1001msec 00:34:01.089 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:34:01.089 00:34:01.089 Disk stats (read/write): 00:34:01.089 nvme0n1: ios=537/994, merge=0/0, ticks=1306/462, in_queue=1768, util=98.90% 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:01.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.089 rmmod nvme_tcp 00:34:01.089 rmmod nvme_fabrics 00:34:01.089 rmmod nvme_keyring 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 907569 ']' 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 907569 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 907569 ']' 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 907569 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:01.089 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 907569 00:34:01.089 13:58:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:01.090 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:01.090 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 907569' 00:34:01.090 killing process with pid 907569 00:34:01.090 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 907569 00:34:01.090 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 907569 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.350 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:03.893 00:34:03.893 real 0m14.805s 00:34:03.893 user 0m37.097s 00:34:03.893 sys 0m7.261s 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.893 ************************************ 00:34:03.893 END TEST nvmf_nmic 00:34:03.893 ************************************ 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:03.893 ************************************ 00:34:03.893 START TEST nvmf_fio_target 00:34:03.893 ************************************ 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:03.893 * Looking for test storage... 
00:34:03.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.893 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.894 
13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:03.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.894 --rc genhtml_branch_coverage=1 00:34:03.894 --rc genhtml_function_coverage=1 00:34:03.894 --rc genhtml_legend=1 00:34:03.894 --rc geninfo_all_blocks=1 00:34:03.894 --rc geninfo_unexecuted_blocks=1 00:34:03.894 00:34:03.894 ' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:03.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.894 --rc genhtml_branch_coverage=1 00:34:03.894 --rc genhtml_function_coverage=1 00:34:03.894 --rc genhtml_legend=1 00:34:03.894 --rc geninfo_all_blocks=1 00:34:03.894 --rc geninfo_unexecuted_blocks=1 00:34:03.894 00:34:03.894 ' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:03.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.894 --rc genhtml_branch_coverage=1 00:34:03.894 --rc genhtml_function_coverage=1 00:34:03.894 --rc genhtml_legend=1 00:34:03.894 --rc geninfo_all_blocks=1 00:34:03.894 --rc geninfo_unexecuted_blocks=1 00:34:03.894 00:34:03.894 ' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:03.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.894 --rc genhtml_branch_coverage=1 00:34:03.894 --rc genhtml_function_coverage=1 00:34:03.894 --rc genhtml_legend=1 00:34:03.894 --rc geninfo_all_blocks=1 
00:34:03.894 --rc geninfo_unexecuted_blocks=1 00:34:03.894 00:34:03.894 ' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:03.894 
13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.894 13:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.894 
13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.894 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.895 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.895 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.895 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.895 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.895 13:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.895 13:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.040 13:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:12.040 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:12.040 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.040 
13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:12.040 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:12.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.040 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.041 13:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.778 ms 00:34:12.041 00:34:12.041 --- 10.0.0.2 ping statistics --- 00:34:12.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.041 rtt min/avg/max/mdev = 0.778/0.778/0.778/0.000 ms 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:34:12.041 00:34:12.041 --- 10.0.0.1 ping statistics --- 00:34:12.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.041 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.041 13:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=912858 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 912858 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 912858 ']' 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:12.041 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.041 [2024-11-06 13:58:34.482459] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.041 [2024-11-06 13:58:34.483576] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:34:12.041 [2024-11-06 13:58:34.483627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.041 [2024-11-06 13:58:34.568013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:12.041 [2024-11-06 13:58:34.609479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.041 [2024-11-06 13:58:34.609519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.041 [2024-11-06 13:58:34.609527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.041 [2024-11-06 13:58:34.609535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.041 [2024-11-06 13:58:34.609541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.041 [2024-11-06 13:58:34.611358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.041 [2024-11-06 13:58:34.611473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:12.041 [2024-11-06 13:58:34.611629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.041 [2024-11-06 13:58:34.611630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.041 [2024-11-06 13:58:34.667492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.041 [2024-11-06 13:58:34.667509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:12.041 [2024-11-06 13:58:34.668473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:12.041 [2024-11-06 13:58:34.669218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:12.041 [2024-11-06 13:58:34.669324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.041 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:12.306 [2024-11-06 13:58:35.476446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.306 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.567 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:12.567 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:12.567 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:12.567 13:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.827 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:12.827 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.087 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:13.087 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:13.347 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.347 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:13.347 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.607 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:13.607 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.868 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:13.868 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:13.868 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:14.127 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:14.127 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:14.127 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:14.127 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:14.388 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.648 [2024-11-06 13:58:37.836255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.648 13:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:14.909 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:14.909 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:15.480 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:34:17.394 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:17.394 [global] 00:34:17.394 thread=1 00:34:17.394 invalidate=1 00:34:17.394 rw=write 00:34:17.394 time_based=1 00:34:17.394 runtime=1 00:34:17.394 ioengine=libaio 00:34:17.394 direct=1 00:34:17.394 bs=4096 00:34:17.394 iodepth=1 00:34:17.394 norandommap=0 00:34:17.394 numjobs=1 00:34:17.394 00:34:17.394 verify_dump=1 00:34:17.394 verify_backlog=512 00:34:17.394 verify_state_save=0 00:34:17.394 do_verify=1 00:34:17.394 verify=crc32c-intel 00:34:17.394 [job0] 00:34:17.394 filename=/dev/nvme0n1 00:34:17.394 [job1] 00:34:17.394 filename=/dev/nvme0n2 00:34:17.394 [job2] 00:34:17.394 filename=/dev/nvme0n3 00:34:17.394 [job3] 00:34:17.394 filename=/dev/nvme0n4 00:34:17.685 Could not set queue depth (nvme0n1) 00:34:17.685 Could not set queue depth (nvme0n2) 00:34:17.685 Could not set queue depth (nvme0n3) 00:34:17.685 Could not set queue depth (nvme0n4) 00:34:17.948 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.948 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.948 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.948 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:17.948 fio-3.35 00:34:17.948 Starting 4 threads 00:34:19.354 00:34:19.354 job0: (groupid=0, jobs=1): err= 0: pid=914353: Wed Nov 6 13:58:42 2024 00:34:19.354 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:19.354 slat (nsec): min=8924, max=47571, avg=28385.37, stdev=3764.99 00:34:19.354 clat (usec): min=753, max=1277, avg=1030.35, stdev=85.69 00:34:19.354 lat (usec): min=781, 
max=1305, avg=1058.73, stdev=86.00 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 971], 00:34:19.354 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:34:19.354 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:34:19.354 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:19.354 | 99.99th=[ 1270] 00:34:19.354 write: IOPS=672, BW=2689KiB/s (2754kB/s)(2692KiB/1001msec); 0 zone resets 00:34:19.354 slat (usec): min=9, max=1615, avg=37.07, stdev=65.28 00:34:19.354 clat (usec): min=293, max=1065, avg=627.59, stdev=129.09 00:34:19.354 lat (usec): min=306, max=2461, avg=664.66, stdev=152.58 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 347], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 515], 00:34:19.354 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:34:19.354 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:34:19.354 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1074], 99.95th=[ 1074], 00:34:19.354 | 99.99th=[ 1074] 00:34:19.354 bw ( KiB/s): min= 4096, max= 4096, per=40.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.354 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.354 lat (usec) : 500=10.04%, 750=36.96%, 1000=22.70% 00:34:19.354 lat (msec) : 2=30.30% 00:34:19.354 cpu : usr=1.60%, sys=5.80%, ctx=1190, majf=0, minf=1 00:34:19.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 issued rwts: total=512,673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.354 job1: (groupid=0, jobs=1): err= 0: pid=914354: Wed Nov 6 13:58:42 2024 00:34:19.354 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:34:19.354 slat (nsec): min=8990, max=64578, avg=28488.96, stdev=3799.83 00:34:19.354 clat (usec): min=726, max=1497, avg=1090.13, stdev=123.33 00:34:19.354 lat (usec): min=754, max=1525, avg=1118.62, stdev=123.47 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 996], 00:34:19.354 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:34:19.354 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[ 1287], 00:34:19.354 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1500], 99.95th=[ 1500], 00:34:19.354 | 99.99th=[ 1500] 00:34:19.354 write: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec); 0 zone resets 00:34:19.354 slat (nsec): min=9201, max=70213, avg=33234.72, stdev=10645.11 00:34:19.354 clat (usec): min=179, max=2005, avg=582.29, stdev=170.80 00:34:19.354 lat (usec): min=190, max=2041, avg=615.53, stdev=173.93 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 239], 5.00th=[ 302], 10.00th=[ 367], 20.00th=[ 429], 00:34:19.354 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 635], 00:34:19.354 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 832], 00:34:19.354 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 2008], 99.95th=[ 2008], 00:34:19.354 | 99.99th=[ 2008] 00:34:19.354 bw ( KiB/s): min= 4096, max= 4096, per=40.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.354 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.354 lat (usec) : 250=0.67%, 500=17.17%, 750=31.06%, 1000=16.50% 00:34:19.354 lat (msec) : 2=34.51%, 4=0.08% 00:34:19.354 cpu : usr=3.40%, sys=4.00%, ctx=1190, majf=0, minf=1 00:34:19.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 issued rwts: total=512,676,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:19.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.354 job2: (groupid=0, jobs=1): err= 0: pid=914355: Wed Nov 6 13:58:42 2024 00:34:19.354 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:19.354 slat (nsec): min=7093, max=61700, avg=26768.42, stdev=4560.94 00:34:19.354 clat (usec): min=393, max=42102, avg=1171.70, stdev=3136.11 00:34:19.354 lat (usec): min=419, max=42128, avg=1198.47, stdev=3136.08 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 537], 5.00th=[ 619], 10.00th=[ 668], 20.00th=[ 791], 00:34:19.354 | 30.00th=[ 857], 40.00th=[ 914], 50.00th=[ 963], 60.00th=[ 1004], 00:34:19.354 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:19.354 | 99.00th=[ 1237], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:19.354 | 99.99th=[42206] 00:34:19.354 write: IOPS=712, BW=2849KiB/s (2918kB/s)(2852KiB/1001msec); 0 zone resets 00:34:19.354 slat (nsec): min=9932, max=55530, avg=31457.09, stdev=9971.87 00:34:19.354 clat (usec): min=122, max=1035, avg=494.21, stdev=173.19 00:34:19.354 lat (usec): min=132, max=1070, avg=525.67, stdev=176.36 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 127], 5.00th=[ 231], 10.00th=[ 277], 20.00th=[ 334], 00:34:19.354 | 30.00th=[ 388], 40.00th=[ 445], 50.00th=[ 502], 60.00th=[ 537], 00:34:19.354 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[ 725], 95.00th=[ 766], 00:34:19.354 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:19.354 | 99.99th=[ 1037] 00:34:19.354 bw ( KiB/s): min= 4096, max= 4096, per=40.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.354 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.354 lat (usec) : 250=3.51%, 500=25.14%, 750=32.33%, 1000=21.88% 00:34:19.354 lat (msec) : 2=16.90%, 50=0.24% 00:34:19.354 cpu : usr=2.00%, sys=3.60%, ctx=1226, majf=0, minf=1 00:34:19.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:19.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 issued rwts: total=512,713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.354 job3: (groupid=0, jobs=1): err= 0: pid=914356: Wed Nov 6 13:58:42 2024 00:34:19.354 read: IOPS=178, BW=714KiB/s (732kB/s)(728KiB/1019msec) 00:34:19.354 slat (nsec): min=7495, max=58959, avg=25589.81, stdev=5603.14 00:34:19.354 clat (usec): min=483, max=42167, avg=3820.95, stdev=10411.27 00:34:19.354 lat (usec): min=509, max=42195, avg=3846.54, stdev=10411.87 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 494], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 742], 00:34:19.354 | 30.00th=[ 807], 40.00th=[ 873], 50.00th=[ 947], 60.00th=[ 988], 00:34:19.354 | 70.00th=[ 1020], 80.00th=[ 1074], 90.00th=[ 1156], 95.00th=[41681], 00:34:19.354 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:19.354 | 99.99th=[42206] 00:34:19.354 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:34:19.354 slat (nsec): min=9710, max=57386, avg=34951.60, stdev=8781.10 00:34:19.354 clat (usec): min=147, max=980, avg=571.30, stdev=143.42 00:34:19.354 lat (usec): min=183, max=1017, avg=606.25, stdev=145.41 00:34:19.354 clat percentiles (usec): 00:34:19.354 | 1.00th=[ 269], 5.00th=[ 338], 10.00th=[ 371], 20.00th=[ 445], 00:34:19.354 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:34:19.354 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 816], 00:34:19.354 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:34:19.354 | 99.99th=[ 979] 00:34:19.354 bw ( KiB/s): min= 4096, max= 4096, per=40.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.354 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.354 lat (usec) : 250=0.58%, 500=22.48%, 750=47.98%, 
1000=19.31% 00:34:19.354 lat (msec) : 2=7.64%, 20=0.14%, 50=1.87% 00:34:19.354 cpu : usr=1.67%, sys=2.36%, ctx=696, majf=0, minf=1 00:34:19.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.354 issued rwts: total=182,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.354 00:34:19.354 Run status group 0 (all jobs): 00:34:19.354 READ: bw=6744KiB/s (6906kB/s), 714KiB/s-2046KiB/s (732kB/s-2095kB/s), io=6872KiB (7037kB), run=1001-1019msec 00:34:19.354 WRITE: bw=9.87MiB/s (10.3MB/s), 2010KiB/s-2849KiB/s (2058kB/s-2918kB/s), io=10.1MiB (10.5MB), run=1001-1019msec 00:34:19.354 00:34:19.354 Disk stats (read/write): 00:34:19.354 nvme0n1: ios=540/512, merge=0/0, ticks=539/248, in_queue=787, util=86.87% 00:34:19.354 nvme0n2: ios=501/512, merge=0/0, ticks=554/233, in_queue=787, util=90.91% 00:34:19.355 nvme0n3: ios=477/512, merge=0/0, ticks=1408/233, in_queue=1641, util=91.97% 00:34:19.355 nvme0n4: ios=94/512, merge=0/0, ticks=1391/210, in_queue=1601, util=94.00% 00:34:19.355 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:19.355 [global] 00:34:19.355 thread=1 00:34:19.355 invalidate=1 00:34:19.355 rw=randwrite 00:34:19.355 time_based=1 00:34:19.355 runtime=1 00:34:19.355 ioengine=libaio 00:34:19.355 direct=1 00:34:19.355 bs=4096 00:34:19.355 iodepth=1 00:34:19.355 norandommap=0 00:34:19.355 numjobs=1 00:34:19.355 00:34:19.355 verify_dump=1 00:34:19.355 verify_backlog=512 00:34:19.355 verify_state_save=0 00:34:19.355 do_verify=1 00:34:19.355 verify=crc32c-intel 00:34:19.355 [job0] 00:34:19.355 filename=/dev/nvme0n1 00:34:19.355 [job1] 00:34:19.355 
filename=/dev/nvme0n2 00:34:19.355 [job2] 00:34:19.355 filename=/dev/nvme0n3 00:34:19.355 [job3] 00:34:19.355 filename=/dev/nvme0n4 00:34:19.355 Could not set queue depth (nvme0n1) 00:34:19.355 Could not set queue depth (nvme0n2) 00:34:19.355 Could not set queue depth (nvme0n3) 00:34:19.355 Could not set queue depth (nvme0n4) 00:34:19.614 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.614 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.614 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.614 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.614 fio-3.35 00:34:19.614 Starting 4 threads 00:34:21.030 00:34:21.030 job0: (groupid=0, jobs=1): err= 0: pid=914879: Wed Nov 6 13:58:44 2024 00:34:21.030 read: IOPS=38, BW=156KiB/s (160kB/s)(156KiB/1001msec) 00:34:21.030 slat (nsec): min=6054, max=26519, avg=14742.23, stdev=8439.10 00:34:21.030 clat (usec): min=502, max=42063, avg=21328.71, stdev=20340.97 00:34:21.030 lat (usec): min=517, max=42078, avg=21343.45, stdev=20347.55 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 502], 5.00th=[ 545], 10.00th=[ 660], 20.00th=[ 668], 00:34:21.030 | 30.00th=[ 709], 40.00th=[ 906], 50.00th=[39060], 60.00th=[40633], 00:34:21.030 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:21.030 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:21.030 | 99.99th=[42206] 00:34:21.030 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:21.030 slat (nsec): min=9267, max=55431, avg=24034.35, stdev=11245.74 00:34:21.030 clat (usec): min=100, max=507, avg=298.54, stdev=72.79 00:34:21.030 lat (usec): min=109, max=526, avg=322.57, stdev=73.93 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 
1.00th=[ 106], 5.00th=[ 120], 10.00th=[ 206], 20.00th=[ 241], 00:34:21.030 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 318], 00:34:21.030 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 408], 00:34:21.030 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 506], 99.95th=[ 506], 00:34:21.030 | 99.99th=[ 506] 00:34:21.030 bw ( KiB/s): min= 4096, max= 4096, per=37.59%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.030 lat (usec) : 250=20.15%, 500=72.60%, 750=2.72%, 1000=0.73% 00:34:21.030 lat (msec) : 2=0.18%, 50=3.63% 00:34:21.030 cpu : usr=0.70%, sys=1.20%, ctx=551, majf=0, minf=1 00:34:21.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 issued rwts: total=39,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.030 job1: (groupid=0, jobs=1): err= 0: pid=914881: Wed Nov 6 13:58:44 2024 00:34:21.030 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:21.030 slat (nsec): min=7525, max=60768, avg=25716.98, stdev=4495.24 00:34:21.030 clat (usec): min=527, max=41056, avg=1038.43, stdev=2506.86 00:34:21.030 lat (usec): min=553, max=41082, avg=1064.15, stdev=2506.88 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 848], 00:34:21.030 | 30.00th=[ 865], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 898], 00:34:21.030 | 70.00th=[ 914], 80.00th=[ 922], 90.00th=[ 947], 95.00th=[ 971], 00:34:21.030 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[41157], 99.95th=[41157], 00:34:21.030 | 99.99th=[41157] 00:34:21.030 write: IOPS=959, BW=3836KiB/s (3928kB/s)(3840KiB/1001msec); 0 zone resets 00:34:21.030 slat (nsec): min=9267, max=67844, avg=25956.03, 
stdev=10098.43 00:34:21.030 clat (usec): min=222, max=704, avg=437.01, stdev=69.58 00:34:21.030 lat (usec): min=232, max=732, avg=462.97, stdev=73.79 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 273], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 367], 00:34:21.030 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 461], 00:34:21.030 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 515], 95.00th=[ 529], 00:34:21.030 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 701], 99.95th=[ 701], 00:34:21.030 | 99.99th=[ 701] 00:34:21.030 bw ( KiB/s): min= 4096, max= 4096, per=37.59%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.030 lat (usec) : 250=0.20%, 500=54.62%, 750=11.28%, 1000=33.29% 00:34:21.030 lat (msec) : 2=0.48%, 50=0.14% 00:34:21.030 cpu : usr=2.10%, sys=3.90%, ctx=1472, majf=0, minf=1 00:34:21.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 issued rwts: total=512,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.030 job2: (groupid=0, jobs=1): err= 0: pid=914885: Wed Nov 6 13:58:44 2024 00:34:21.030 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:21.030 slat (nsec): min=8384, max=61715, avg=26092.24, stdev=3125.74 00:34:21.030 clat (usec): min=752, max=1489, avg=1147.87, stdev=141.49 00:34:21.030 lat (usec): min=778, max=1515, avg=1173.96, stdev=141.52 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 832], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1037], 00:34:21.030 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1172], 00:34:21.030 | 70.00th=[ 1221], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1401], 00:34:21.030 | 99.00th=[ 1467], 99.50th=[ 1483], 
99.90th=[ 1483], 99.95th=[ 1483], 00:34:21.030 | 99.99th=[ 1483] 00:34:21.030 write: IOPS=576, BW=2306KiB/s (2361kB/s)(2308KiB/1001msec); 0 zone resets 00:34:21.030 slat (nsec): min=9876, max=67086, avg=30679.79, stdev=7041.99 00:34:21.030 clat (usec): min=253, max=1094, avg=645.31, stdev=136.21 00:34:21.030 lat (usec): min=263, max=1125, avg=675.99, stdev=138.03 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 355], 5.00th=[ 429], 10.00th=[ 478], 20.00th=[ 529], 00:34:21.030 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676], 00:34:21.030 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 889], 00:34:21.030 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1090], 99.95th=[ 1090], 00:34:21.030 | 99.99th=[ 1090] 00:34:21.030 bw ( KiB/s): min= 4096, max= 4096, per=37.59%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.030 lat (usec) : 500=7.71%, 750=34.16%, 1000=17.36% 00:34:21.030 lat (msec) : 2=40.77% 00:34:21.030 cpu : usr=1.60%, sys=3.30%, ctx=1089, majf=0, minf=1 00:34:21.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 issued rwts: total=512,577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.030 job3: (groupid=0, jobs=1): err= 0: pid=914886: Wed Nov 6 13:58:44 2024 00:34:21.030 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:21.030 slat (nsec): min=8510, max=46369, avg=26374.32, stdev=2971.16 00:34:21.030 clat (usec): min=539, max=1300, avg=1052.53, stdev=113.25 00:34:21.030 lat (usec): min=564, max=1326, avg=1078.90, stdev=113.23 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 676], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 979], 00:34:21.030 | 
30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:34:21.030 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:34:21.030 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:21.030 | 99.99th=[ 1303] 00:34:21.030 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:34:21.030 slat (nsec): min=9394, max=64580, avg=30061.32, stdev=7797.91 00:34:21.030 clat (usec): min=162, max=919, avg=616.11, stdev=123.38 00:34:21.030 lat (usec): min=195, max=950, avg=646.17, stdev=125.54 00:34:21.030 clat percentiles (usec): 00:34:21.030 | 1.00th=[ 289], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 515], 00:34:21.030 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:34:21.030 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:34:21.030 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922], 00:34:21.030 | 99.99th=[ 922] 00:34:21.030 bw ( KiB/s): min= 4096, max= 4096, per=37.59%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.030 lat (usec) : 250=0.25%, 500=9.41%, 750=40.34%, 1000=17.39% 00:34:21.030 lat (msec) : 2=32.61% 00:34:21.030 cpu : usr=1.90%, sys=3.40%, ctx=1190, majf=0, minf=1 00:34:21.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.030 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.030 00:34:21.030 Run status group 0 (all jobs): 00:34:21.030 READ: bw=6294KiB/s (6445kB/s), 156KiB/s-2046KiB/s (160kB/s-2095kB/s), io=6300KiB (6451kB), run=1001-1001msec 00:34:21.030 WRITE: bw=10.6MiB/s (11.2MB/s), 2046KiB/s-3836KiB/s (2095kB/s-3928kB/s), io=10.7MiB (11.2MB), 
run=1001-1001msec 00:34:21.030 00:34:21.030 Disk stats (read/write): 00:34:21.030 nvme0n1: ios=67/512, merge=0/0, ticks=756/152, in_queue=908, util=91.48% 00:34:21.030 nvme0n2: ios=549/587, merge=0/0, ticks=570/266, in_queue=836, util=87.63% 00:34:21.030 nvme0n3: ios=406/512, merge=0/0, ticks=455/317, in_queue=772, util=88.22% 00:34:21.030 nvme0n4: ios=456/512, merge=0/0, ticks=470/305, in_queue=775, util=89.45% 00:34:21.030 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:21.030 [global] 00:34:21.030 thread=1 00:34:21.030 invalidate=1 00:34:21.030 rw=write 00:34:21.030 time_based=1 00:34:21.030 runtime=1 00:34:21.031 ioengine=libaio 00:34:21.031 direct=1 00:34:21.031 bs=4096 00:34:21.031 iodepth=128 00:34:21.031 norandommap=0 00:34:21.031 numjobs=1 00:34:21.031 00:34:21.031 verify_dump=1 00:34:21.031 verify_backlog=512 00:34:21.031 verify_state_save=0 00:34:21.031 do_verify=1 00:34:21.031 verify=crc32c-intel 00:34:21.031 [job0] 00:34:21.031 filename=/dev/nvme0n1 00:34:21.031 [job1] 00:34:21.031 filename=/dev/nvme0n2 00:34:21.031 [job2] 00:34:21.031 filename=/dev/nvme0n3 00:34:21.031 [job3] 00:34:21.031 filename=/dev/nvme0n4 00:34:21.031 Could not set queue depth (nvme0n1) 00:34:21.031 Could not set queue depth (nvme0n2) 00:34:21.031 Could not set queue depth (nvme0n3) 00:34:21.031 Could not set queue depth (nvme0n4) 00:34:21.297 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.297 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.297 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.297 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.297 fio-3.35 
00:34:21.297 Starting 4 threads 00:34:22.694 00:34:22.694 job0: (groupid=0, jobs=1): err= 0: pid=915397: Wed Nov 6 13:58:45 2024 00:34:22.694 read: IOPS=6292, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1008msec) 00:34:22.694 slat (nsec): min=1031, max=21231k, avg=78481.49, stdev=704702.59 00:34:22.694 clat (usec): min=2070, max=46883, avg=9774.55, stdev=5697.12 00:34:22.694 lat (usec): min=3091, max=46891, avg=9853.03, stdev=5762.52 00:34:22.694 clat percentiles (usec): 00:34:22.694 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6456], 00:34:22.694 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8586], 00:34:22.694 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[15139], 95.00th=[22676], 00:34:22.694 | 99.00th=[34866], 99.50th=[36963], 99.90th=[41681], 99.95th=[41681], 00:34:22.694 | 99.99th=[46924] 00:34:22.694 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:34:22.694 slat (nsec): min=1584, max=10437k, avg=67921.65, stdev=523998.91 00:34:22.694 clat (usec): min=1292, max=46862, avg=9890.83, stdev=7045.29 00:34:22.694 lat (usec): min=1303, max=46872, avg=9958.75, stdev=7081.39 00:34:22.694 clat percentiles (usec): 00:34:22.694 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5932], 00:34:22.694 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 8160], 00:34:22.694 | 70.00th=[10290], 80.00th=[13173], 90.00th=[16909], 95.00th=[28967], 00:34:22.694 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:34:22.694 | 99.99th=[46924] 00:34:22.694 bw ( KiB/s): min=24752, max=28496, per=31.44%, avg=26624.00, stdev=2647.41, samples=2 00:34:22.694 iops : min= 6188, max= 7124, avg=6656.00, stdev=661.85, samples=2 00:34:22.694 lat (msec) : 2=0.03%, 4=1.80%, 10=68.98%, 20=22.32%, 50=6.86% 00:34:22.694 cpu : usr=5.56%, sys=7.75%, ctx=296, majf=0, minf=1 00:34:22.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:22.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:22.694 issued rwts: total=6343,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:22.694 job1: (groupid=0, jobs=1): err= 0: pid=915398: Wed Nov 6 13:58:45 2024 00:34:22.694 read: IOPS=4397, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1007msec) 00:34:22.694 slat (nsec): min=892, max=13750k, avg=100758.31, stdev=779736.10 00:34:22.694 clat (usec): min=540, max=44338, avg=12738.37, stdev=7375.11 00:34:22.694 lat (usec): min=555, max=44344, avg=12839.13, stdev=7456.79 00:34:22.694 clat percentiles (usec): 00:34:22.694 | 1.00th=[ 3818], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 7308], 00:34:22.694 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11338], 00:34:22.694 | 70.00th=[13698], 80.00th=[18220], 90.00th=[23987], 95.00th=[27657], 00:34:22.694 | 99.00th=[35914], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:34:22.694 | 99.99th=[44303] 00:34:22.694 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:34:22.694 slat (nsec): min=1531, max=14361k, avg=112325.51, stdev=793868.14 00:34:22.694 clat (usec): min=900, max=70592, avg=15418.27, stdev=13961.09 00:34:22.694 lat (usec): min=927, max=70600, avg=15530.60, stdev=14066.68 00:34:22.694 clat percentiles (usec): 00:34:22.694 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 6521], 20.00th=[ 7898], 00:34:22.694 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[11469], 00:34:22.694 | 70.00th=[14615], 80.00th=[20055], 90.00th=[27132], 95.00th=[55837], 00:34:22.694 | 99.00th=[67634], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:34:22.694 | 99.99th=[70779] 00:34:22.694 bw ( KiB/s): min=16816, max=20048, per=21.76%, avg=18432.00, stdev=2285.37, samples=2 00:34:22.694 iops : min= 4204, max= 5012, avg=4608.00, stdev=571.34, samples=2 00:34:22.694 lat (usec) : 750=0.08%, 1000=0.22% 00:34:22.694 
lat (msec) : 2=0.23%, 4=0.83%, 10=50.60%, 20=27.80%, 50=17.28% 00:34:22.694 lat (msec) : 100=2.97% 00:34:22.694 cpu : usr=3.98%, sys=3.98%, ctx=317, majf=0, minf=1 00:34:22.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:22.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:22.694 issued rwts: total=4428,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:22.695 job2: (groupid=0, jobs=1): err= 0: pid=915400: Wed Nov 6 13:58:45 2024 00:34:22.695 read: IOPS=6552, BW=25.6MiB/s (26.8MB/s)(25.8MiB/1009msec) 00:34:22.695 slat (nsec): min=967, max=12820k, avg=68868.76, stdev=535404.31 00:34:22.695 clat (usec): min=1193, max=29871, avg=9779.59, stdev=3636.17 00:34:22.695 lat (usec): min=2415, max=29878, avg=9848.46, stdev=3655.79 00:34:22.695 clat percentiles (usec): 00:34:22.695 | 1.00th=[ 3163], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7308], 00:34:22.695 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:34:22.695 | 70.00th=[10683], 80.00th=[11863], 90.00th=[13960], 95.00th=[16450], 00:34:22.695 | 99.00th=[26608], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:34:22.695 | 99.99th=[29754] 00:34:22.695 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec); 0 zone resets 00:34:22.695 slat (nsec): min=1620, max=14179k, avg=72257.36, stdev=542799.90 00:34:22.695 clat (usec): min=677, max=31576, avg=9518.88, stdev=5208.32 00:34:22.695 lat (usec): min=689, max=31587, avg=9591.14, stdev=5248.73 00:34:22.695 clat percentiles (usec): 00:34:22.695 | 1.00th=[ 1221], 5.00th=[ 4293], 10.00th=[ 5080], 20.00th=[ 6325], 00:34:22.695 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 8029], 60.00th=[ 8586], 00:34:22.695 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[15926], 95.00th=[22938], 00:34:22.695 | 99.00th=[28443], 99.50th=[30278], 
99.90th=[31589], 99.95th=[31589], 00:34:22.695 | 99.99th=[31589] 00:34:22.695 bw ( KiB/s): min=24576, max=28672, per=31.44%, avg=26624.00, stdev=2896.31, samples=2 00:34:22.695 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:22.695 lat (usec) : 750=0.05%, 1000=0.25% 00:34:22.695 lat (msec) : 2=0.37%, 4=2.31%, 10=64.39%, 20=28.45%, 50=4.18% 00:34:22.695 cpu : usr=4.46%, sys=8.04%, ctx=364, majf=0, minf=2 00:34:22.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:22.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:22.695 issued rwts: total=6611,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:22.695 job3: (groupid=0, jobs=1): err= 0: pid=915402: Wed Nov 6 13:58:45 2024 00:34:22.695 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:34:22.695 slat (nsec): min=1011, max=15712k, avg=129677.43, stdev=867508.20 00:34:22.695 clat (usec): min=3886, max=62104, avg=15059.70, stdev=8756.34 00:34:22.695 lat (usec): min=3896, max=62205, avg=15189.37, stdev=8842.63 00:34:22.695 clat percentiles (usec): 00:34:22.695 | 1.00th=[ 6063], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8586], 00:34:22.695 | 30.00th=[ 9241], 40.00th=[10945], 50.00th=[13304], 60.00th=[14353], 00:34:22.695 | 70.00th=[16909], 80.00th=[19530], 90.00th=[24249], 95.00th=[30540], 00:34:22.695 | 99.00th=[54789], 99.50th=[58459], 99.90th=[62129], 99.95th=[62129], 00:34:22.695 | 99.99th=[62129] 00:34:22.695 write: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1006msec); 0 zone resets 00:34:22.695 slat (nsec): min=1730, max=14677k, avg=168075.56, stdev=828321.87 00:34:22.695 clat (usec): min=1617, max=76228, avg=23499.18, stdev=20118.14 00:34:22.695 lat (usec): min=1628, max=76237, avg=23667.26, stdev=20252.03 00:34:22.695 clat percentiles (usec): 00:34:22.695 | 
1.00th=[ 4015], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 8586], 00:34:22.695 | 30.00th=[10159], 40.00th=[11994], 50.00th=[13042], 60.00th=[14091], 00:34:22.695 | 70.00th=[27919], 80.00th=[51119], 90.00th=[55837], 95.00th=[63177], 00:34:22.695 | 99.00th=[71828], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:34:22.695 | 99.99th=[76022] 00:34:22.695 bw ( KiB/s): min=12288, max=14240, per=15.66%, avg=13264.00, stdev=1380.27, samples=2 00:34:22.695 iops : min= 3072, max= 3560, avg=3316.00, stdev=345.07, samples=2 00:34:22.695 lat (msec) : 2=0.18%, 4=0.49%, 10=30.73%, 20=43.25%, 50=13.34% 00:34:22.695 lat (msec) : 100=12.00% 00:34:22.695 cpu : usr=2.49%, sys=3.78%, ctx=338, majf=0, minf=1 00:34:22.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:22.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:22.695 issued rwts: total=3072,3443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:22.695 00:34:22.695 Run status group 0 (all jobs): 00:34:22.695 READ: bw=79.2MiB/s (83.0MB/s), 11.9MiB/s-25.6MiB/s (12.5MB/s-26.8MB/s), io=79.9MiB (83.8MB), run=1006-1009msec 00:34:22.695 WRITE: bw=82.7MiB/s (86.7MB/s), 13.4MiB/s-25.8MiB/s (14.0MB/s-27.0MB/s), io=83.4MiB (87.5MB), run=1006-1009msec 00:34:22.695 00:34:22.695 Disk stats (read/write): 00:34:22.695 nvme0n1: ios=5682/6023, merge=0/0, ticks=49652/49519, in_queue=99171, util=87.27% 00:34:22.695 nvme0n2: ios=3387/3584, merge=0/0, ticks=22492/27965, in_queue=50457, util=86.99% 00:34:22.695 nvme0n3: ios=5679/5711, merge=0/0, ticks=50538/48976, in_queue=99514, util=91.19% 00:34:22.695 nvme0n4: ios=2067/2487, merge=0/0, ticks=32704/67377, in_queue=100081, util=99.57% 00:34:22.695 13:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:22.695 [global] 00:34:22.695 thread=1 00:34:22.695 invalidate=1 00:34:22.695 rw=randwrite 00:34:22.695 time_based=1 00:34:22.695 runtime=1 00:34:22.695 ioengine=libaio 00:34:22.695 direct=1 00:34:22.695 bs=4096 00:34:22.695 iodepth=128 00:34:22.695 norandommap=0 00:34:22.695 numjobs=1 00:34:22.695 00:34:22.695 verify_dump=1 00:34:22.695 verify_backlog=512 00:34:22.695 verify_state_save=0 00:34:22.695 do_verify=1 00:34:22.695 verify=crc32c-intel 00:34:22.695 [job0] 00:34:22.695 filename=/dev/nvme0n1 00:34:22.695 [job1] 00:34:22.695 filename=/dev/nvme0n2 00:34:22.695 [job2] 00:34:22.695 filename=/dev/nvme0n3 00:34:22.695 [job3] 00:34:22.695 filename=/dev/nvme0n4 00:34:22.695 Could not set queue depth (nvme0n1) 00:34:22.695 Could not set queue depth (nvme0n2) 00:34:22.695 Could not set queue depth (nvme0n3) 00:34:22.695 Could not set queue depth (nvme0n4) 00:34:22.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:22.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:22.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:22.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:22.955 fio-3.35 00:34:22.955 Starting 4 threads 00:34:24.338 00:34:24.338 job0: (groupid=0, jobs=1): err= 0: pid=915927: Wed Nov 6 13:58:47 2024 00:34:24.338 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:34:24.338 slat (nsec): min=1356, max=7780.8k, avg=107186.37, stdev=648412.24 00:34:24.338 clat (usec): min=4424, max=37907, avg=13813.21, stdev=5818.13 00:34:24.339 lat (usec): min=4429, max=42815, avg=13920.40, stdev=5858.21 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 6980], 
5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9372], 00:34:24.339 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12256], 60.00th=[13435], 00:34:24.339 | 70.00th=[13960], 80.00th=[16450], 90.00th=[23462], 95.00th=[27395], 00:34:24.339 | 99.00th=[32637], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:34:24.339 | 99.99th=[38011] 00:34:24.339 write: IOPS=4373, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1006msec); 0 zone resets 00:34:24.339 slat (usec): min=2, max=9113, avg=123.33, stdev=661.67 00:34:24.339 clat (usec): min=2114, max=50788, avg=16114.94, stdev=9910.14 00:34:24.339 lat (usec): min=5203, max=50792, avg=16238.27, stdev=9977.91 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 6980], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[ 8848], 00:34:24.339 | 30.00th=[ 9634], 40.00th=[11076], 50.00th=[11731], 60.00th=[13435], 00:34:24.339 | 70.00th=[17957], 80.00th=[23200], 90.00th=[32900], 95.00th=[39584], 00:34:24.339 | 99.00th=[45351], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:34:24.339 | 99.99th=[50594] 00:34:24.339 bw ( KiB/s): min=10880, max=23296, per=21.09%, avg=17088.00, stdev=8779.44, samples=2 00:34:24.339 iops : min= 2720, max= 5824, avg=4272.00, stdev=2194.86, samples=2 00:34:24.339 lat (msec) : 4=0.01%, 10=28.80%, 20=51.84%, 50=19.27%, 100=0.08% 00:34:24.339 cpu : usr=2.59%, sys=3.98%, ctx=391, majf=0, minf=1 00:34:24.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:24.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.339 issued rwts: total=4096,4400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.339 job1: (groupid=0, jobs=1): err= 0: pid=915928: Wed Nov 6 13:58:47 2024 00:34:24.339 read: IOPS=6220, BW=24.3MiB/s (25.5MB/s)(25.4MiB/1047msec) 00:34:24.339 slat (nsec): min=901, max=7728.2k, avg=76988.35, stdev=488525.60 
00:34:24.339 clat (usec): min=1957, max=56620, avg=10712.44, stdev=7163.29 00:34:24.339 lat (usec): min=1960, max=61284, avg=10789.43, stdev=7191.58 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 4686], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6587], 00:34:24.339 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 8356], 60.00th=[ 9372], 00:34:24.339 | 70.00th=[11863], 80.00th=[14091], 90.00th=[17171], 95.00th=[19268], 00:34:24.339 | 99.00th=[51643], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:34:24.339 | 99.99th=[56361] 00:34:24.339 write: IOPS=6357, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1047msec); 0 zone resets 00:34:24.339 slat (nsec): min=1542, max=6049.2k, avg=70662.49, stdev=419839.33 00:34:24.339 clat (usec): min=3254, max=33734, avg=9434.09, stdev=4317.67 00:34:24.339 lat (usec): min=3258, max=33737, avg=9504.75, stdev=4353.09 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 4080], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 6718], 00:34:24.339 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8848], 00:34:24.339 | 70.00th=[10290], 80.00th=[11994], 90.00th=[13698], 95.00th=[16450], 00:34:24.339 | 99.00th=[30278], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:34:24.339 | 99.99th=[33817] 00:34:24.339 bw ( KiB/s): min=25816, max=27432, per=32.86%, avg=26624.00, stdev=1142.68, samples=2 00:34:24.339 iops : min= 6454, max= 6858, avg=6656.00, stdev=285.67, samples=2 00:34:24.339 lat (msec) : 2=0.06%, 4=0.41%, 10=64.28%, 20=31.96%, 50=2.65% 00:34:24.339 lat (msec) : 100=0.64% 00:34:24.339 cpu : usr=3.06%, sys=7.07%, ctx=495, majf=0, minf=2 00:34:24.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:24.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.339 issued rwts: total=6513,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.339 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:34:24.339 job2: (groupid=0, jobs=1): err= 0: pid=915929: Wed Nov 6 13:58:47 2024 00:34:24.339 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:34:24.339 slat (nsec): min=970, max=14479k, avg=103977.33, stdev=687889.58 00:34:24.339 clat (usec): min=5162, max=57465, avg=13662.64, stdev=5943.28 00:34:24.339 lat (usec): min=5167, max=57468, avg=13766.61, stdev=5994.08 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 5342], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 8225], 00:34:24.339 | 30.00th=[ 9241], 40.00th=[11338], 50.00th=[13173], 60.00th=[15139], 00:34:24.339 | 70.00th=[16909], 80.00th=[18220], 90.00th=[20841], 95.00th=[24249], 00:34:24.339 | 99.00th=[27132], 99.50th=[30540], 99.90th=[52691], 99.95th=[52691], 00:34:24.339 | 99.99th=[57410] 00:34:24.339 write: IOPS=5007, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1005msec); 0 zone resets 00:34:24.339 slat (nsec): min=1601, max=7351.6k, avg=98606.00, stdev=558774.89 00:34:24.339 clat (usec): min=1281, max=30567, avg=12767.72, stdev=5716.24 00:34:24.339 lat (usec): min=1288, max=30576, avg=12866.33, stdev=5767.22 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 3785], 5.00th=[ 5473], 10.00th=[ 6783], 20.00th=[ 7767], 00:34:24.339 | 30.00th=[ 8160], 40.00th=[10028], 50.00th=[11994], 60.00th=[14222], 00:34:24.339 | 70.00th=[15664], 80.00th=[16909], 90.00th=[19006], 95.00th=[25035], 00:34:24.339 | 99.00th=[29492], 99.50th=[30016], 99.90th=[30540], 99.95th=[30540], 00:34:24.339 | 99.99th=[30540] 00:34:24.339 bw ( KiB/s): min=17384, max=21864, per=24.22%, avg=19624.00, stdev=3167.84, samples=2 00:34:24.339 iops : min= 4346, max= 5466, avg=4906.00, stdev=791.96, samples=2 00:34:24.339 lat (msec) : 2=0.11%, 4=0.41%, 10=37.22%, 20=51.98%, 50=10.10% 00:34:24.339 lat (msec) : 100=0.18% 00:34:24.339 cpu : usr=3.09%, sys=5.18%, ctx=454, majf=0, minf=2 00:34:24.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:24.339 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.339 issued rwts: total=4608,5033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.339 job3: (groupid=0, jobs=1): err= 0: pid=915930: Wed Nov 6 13:58:47 2024 00:34:24.339 read: IOPS=4634, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1003msec) 00:34:24.339 slat (nsec): min=958, max=7513.4k, avg=95975.58, stdev=606636.37 00:34:24.339 clat (usec): min=1160, max=44296, avg=12877.22, stdev=4946.92 00:34:24.339 lat (usec): min=2383, max=45870, avg=12973.20, stdev=4971.93 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 4555], 5.00th=[ 6456], 10.00th=[ 7898], 20.00th=[ 8979], 00:34:24.339 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[12256], 60.00th=[13698], 00:34:24.339 | 70.00th=[15008], 80.00th=[16319], 90.00th=[20055], 95.00th=[22676], 00:34:24.339 | 99.00th=[23987], 99.50th=[23987], 99.90th=[42730], 99.95th=[42730], 00:34:24.339 | 99.99th=[44303] 00:34:24.339 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:24.339 slat (nsec): min=1553, max=6225.2k, avg=93882.73, stdev=527700.81 00:34:24.339 clat (usec): min=430, max=58829, avg=13075.30, stdev=7146.62 00:34:24.339 lat (usec): min=653, max=58834, avg=13169.18, stdev=7167.14 00:34:24.339 clat percentiles (usec): 00:34:24.339 | 1.00th=[ 1532], 5.00th=[ 5080], 10.00th=[ 6456], 20.00th=[ 8848], 00:34:24.339 | 30.00th=[ 9634], 40.00th=[11076], 50.00th=[12256], 60.00th=[13698], 00:34:24.339 | 70.00th=[14091], 80.00th=[15008], 90.00th=[18220], 95.00th=[26870], 00:34:24.339 | 99.00th=[46924], 99.50th=[49021], 99.90th=[58983], 99.95th=[58983], 00:34:24.339 | 99.99th=[58983] 00:34:24.339 bw ( KiB/s): min=18336, max=21920, per=24.84%, avg=20128.00, stdev=2534.27, samples=2 00:34:24.339 iops : min= 4584, max= 5480, avg=5032.00, stdev=633.57, samples=2 00:34:24.339 lat (usec) : 
500=0.01%, 750=0.12%, 1000=0.15% 00:34:24.339 lat (msec) : 2=0.67%, 4=1.32%, 10=31.65%, 20=56.30%, 50=9.56% 00:34:24.339 lat (msec) : 100=0.21% 00:34:24.339 cpu : usr=2.40%, sys=5.39%, ctx=384, majf=0, minf=1 00:34:24.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:24.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.339 issued rwts: total=4648,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.339 00:34:24.339 Run status group 0 (all jobs): 00:34:24.339 READ: bw=74.1MiB/s (77.7MB/s), 15.9MiB/s-24.3MiB/s (16.7MB/s-25.5MB/s), io=77.6MiB (81.4MB), run=1003-1047msec 00:34:24.339 WRITE: bw=79.1MiB/s (83.0MB/s), 17.1MiB/s-24.8MiB/s (17.9MB/s-26.0MB/s), io=82.8MiB (86.9MB), run=1003-1047msec 00:34:24.339 00:34:24.339 Disk stats (read/write): 00:34:24.339 nvme0n1: ios=3197/3584, merge=0/0, ticks=17880/19760, in_queue=37640, util=99.80% 00:34:24.339 nvme0n2: ios=6537/6656, merge=0/0, ticks=25522/22488, in_queue=48010, util=86.45% 00:34:24.339 nvme0n3: ios=3072/3442, merge=0/0, ticks=17320/17092, in_queue=34412, util=86.26% 00:34:24.339 nvme0n4: ios=3362/3599, merge=0/0, ticks=17737/24215, in_queue=41952, util=88.98% 00:34:24.339 13:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:24.339 13:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:24.339 13:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=916201 00:34:24.339 13:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:24.339 [global] 00:34:24.339 thread=1 00:34:24.339 invalidate=1 00:34:24.339 rw=read 
00:34:24.339 time_based=1 00:34:24.339 runtime=10 00:34:24.339 ioengine=libaio 00:34:24.339 direct=1 00:34:24.339 bs=4096 00:34:24.339 iodepth=1 00:34:24.339 norandommap=1 00:34:24.339 numjobs=1 00:34:24.339 00:34:24.339 [job0] 00:34:24.339 filename=/dev/nvme0n1 00:34:24.339 [job1] 00:34:24.339 filename=/dev/nvme0n2 00:34:24.339 [job2] 00:34:24.339 filename=/dev/nvme0n3 00:34:24.339 [job3] 00:34:24.339 filename=/dev/nvme0n4 00:34:24.339 Could not set queue depth (nvme0n1) 00:34:24.339 Could not set queue depth (nvme0n2) 00:34:24.339 Could not set queue depth (nvme0n3) 00:34:24.339 Could not set queue depth (nvme0n4) 00:34:24.600 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.600 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.600 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.600 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.600 fio-3.35 00:34:24.600 Starting 4 threads 00:34:27.253 13:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:27.514 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1396736, buflen=4096 00:34:27.514 fio: pid=916453, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:27.514 13:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:27.514 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10149888, buflen=4096 00:34:27.514 fio: pid=916452, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:27.514 13:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:27.514 13:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:27.775 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:27.775 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:27.775 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=405504, buflen=4096 00:34:27.775 fio: pid=916448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:28.036 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.036 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:28.036 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9129984, buflen=4096 00:34:28.036 fio: pid=916451, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:28.036 00:34:28.036 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=916448: Wed Nov 6 13:58:51 2024 00:34:28.036 read: IOPS=33, BW=133KiB/s (136kB/s)(396KiB/2983msec) 00:34:28.036 slat (usec): min=5, max=30601, avg=563.05, stdev=3846.50 00:34:28.036 clat (usec): min=558, max=42132, avg=29334.12, stdev=18987.96 00:34:28.036 lat (usec): min=565, max=71976, avg=29902.61, stdev=19723.28 00:34:28.036 clat percentiles 
(usec): 00:34:28.036 | 1.00th=[ 562], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 701], 00:34:28.036 | 30.00th=[ 906], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:28.036 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:28.036 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.036 | 99.99th=[42206] 00:34:28.036 bw ( KiB/s): min= 95, max= 328, per=2.18%, avg=142.20, stdev=103.87, samples=5 00:34:28.036 iops : min= 23, max= 82, avg=35.40, stdev=26.05, samples=5 00:34:28.036 lat (usec) : 750=24.00%, 1000=6.00% 00:34:28.036 lat (msec) : 50=69.00% 00:34:28.036 cpu : usr=0.13%, sys=0.00%, ctx=103, majf=0, minf=1 00:34:28.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.036 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.036 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.036 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=916451: Wed Nov 6 13:58:51 2024 00:34:28.036 read: IOPS=704, BW=2818KiB/s (2886kB/s)(8916KiB/3164msec) 00:34:28.036 slat (usec): min=6, max=17784, avg=56.82, stdev=645.61 00:34:28.036 clat (usec): min=457, max=41663, avg=1342.27, stdev=2947.76 00:34:28.036 lat (usec): min=483, max=41688, avg=1399.10, stdev=3014.23 00:34:28.036 clat percentiles (usec): 00:34:28.036 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 889], 00:34:28.036 | 30.00th=[ 1037], 40.00th=[ 1139], 50.00th=[ 1205], 60.00th=[ 1237], 00:34:28.036 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1369], 00:34:28.036 | 99.00th=[ 1467], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:34:28.037 | 99.99th=[41681] 00:34:28.037 bw ( KiB/s): min= 1696, max= 3602, per=43.45%, avg=2827.00, stdev=782.31, 
samples=6 00:34:28.037 iops : min= 424, max= 900, avg=706.67, stdev=195.48, samples=6 00:34:28.037 lat (usec) : 500=0.04%, 750=5.11%, 1000=22.24% 00:34:28.037 lat (msec) : 2=71.93%, 10=0.09%, 50=0.54% 00:34:28.037 cpu : usr=0.79%, sys=2.06%, ctx=2238, majf=0, minf=2 00:34:28.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.037 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.037 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=916452: Wed Nov 6 13:58:51 2024 00:34:28.037 read: IOPS=892, BW=3569KiB/s (3655kB/s)(9912KiB/2777msec) 00:34:28.037 slat (usec): min=7, max=21714, avg=39.46, stdev=490.20 00:34:28.037 clat (usec): min=427, max=2369, avg=1063.85, stdev=138.49 00:34:28.037 lat (usec): min=453, max=22492, avg=1103.31, stdev=503.76 00:34:28.037 clat percentiles (usec): 00:34:28.037 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 848], 20.00th=[ 979], 00:34:28.037 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:34:28.037 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:34:28.037 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1434], 99.95th=[ 1713], 00:34:28.037 | 99.99th=[ 2376] 00:34:28.037 bw ( KiB/s): min= 3576, max= 3688, per=55.74%, avg=3627.20, stdev=56.17, samples=5 00:34:28.037 iops : min= 894, max= 922, avg=906.80, stdev=14.04, samples=5 00:34:28.037 lat (usec) : 500=0.16%, 750=2.70%, 1000=19.28% 00:34:28.037 lat (msec) : 2=77.77%, 4=0.04% 00:34:28.037 cpu : usr=0.97%, sys=2.70%, ctx=2481, majf=0, minf=2 00:34:28.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:28.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.037 issued rwts: total=2479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.037 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=916453: Wed Nov 6 13:58:51 2024 00:34:28.037 read: IOPS=132, BW=527KiB/s (539kB/s)(1364KiB/2589msec) 00:34:28.037 slat (nsec): min=5722, max=62632, avg=23661.92, stdev=8020.14 00:34:28.037 clat (usec): min=475, max=41917, avg=7485.96, stdev=14866.18 00:34:28.037 lat (usec): min=493, max=41943, avg=7509.62, stdev=14866.90 00:34:28.037 clat percentiles (usec): 00:34:28.037 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 660], 20.00th=[ 709], 00:34:28.037 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 857], 60.00th=[ 1029], 00:34:28.037 | 70.00th=[ 1139], 80.00th=[ 1237], 90.00th=[41157], 95.00th=[41157], 00:34:28.037 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:28.037 | 99.99th=[41681] 00:34:28.037 bw ( KiB/s): min= 96, max= 1536, per=7.15%, avg=465.60, stdev=609.22, samples=5 00:34:28.037 iops : min= 24, max= 384, avg=116.40, stdev=152.30, samples=5 00:34:28.037 lat (usec) : 500=0.58%, 750=27.19%, 1000=29.53% 00:34:28.037 lat (msec) : 2=25.73%, 20=0.29%, 50=16.37% 00:34:28.037 cpu : usr=0.15%, sys=0.31%, ctx=342, majf=0, minf=2 00:34:28.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.037 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.037 issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.037 00:34:28.037 Run status group 0 (all jobs): 00:34:28.037 READ: bw=6507KiB/s (6663kB/s), 133KiB/s-3569KiB/s (136kB/s-3655kB/s), io=20.1MiB (21.1MB), 
run=2589-3164msec 00:34:28.037 00:34:28.037 Disk stats (read/write): 00:34:28.037 nvme0n1: ios=96/0, merge=0/0, ticks=2780/0, in_queue=2780, util=92.99% 00:34:28.037 nvme0n2: ios=2195/0, merge=0/0, ticks=2884/0, in_queue=2884, util=93.62% 00:34:28.037 nvme0n3: ios=2348/0, merge=0/0, ticks=2467/0, in_queue=2467, util=96.03% 00:34:28.037 nvme0n4: ios=342/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.05% 00:34:28.037 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.037 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:28.297 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.297 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:28.558 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.558 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:28.819 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.819 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:28.819 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@69 -- # fio_status=0 00:34:28.819 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 916201 00:34:28.819 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:28.819 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:29.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:29.080 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:29.080 nvmf hotplug test: fio failed as expected 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.081 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.081 rmmod nvme_tcp 00:34:29.342 rmmod nvme_fabrics 00:34:29.342 rmmod nvme_keyring 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 912858 ']' 00:34:29.342 
13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 912858 ']' 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 912858' 00:34:29.342 killing process with pid 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 912858 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.342 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.604 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.604 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.604 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.604 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.604 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.518 00:34:31.518 real 0m28.033s 00:34:31.518 user 2m19.688s 00:34:31.518 sys 0m12.363s 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.518 ************************************ 00:34:31.518 END TEST nvmf_fio_target 00:34:31.518 ************************************ 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:31.518 13:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:31.518 ************************************ 00:34:31.518 START TEST nvmf_bdevio 00:34:31.518 ************************************ 00:34:31.518 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:31.779 * Looking for test storage... 00:34:31.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:31.779 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:31.779 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:31.779 13:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.779 13:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.779 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.780 --rc genhtml_branch_coverage=1 
00:34:31.780 --rc genhtml_function_coverage=1 00:34:31.780 --rc genhtml_legend=1 00:34:31.780 --rc geninfo_all_blocks=1 00:34:31.780 --rc geninfo_unexecuted_blocks=1 00:34:31.780 00:34:31.780 ' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.780 --rc genhtml_branch_coverage=1 00:34:31.780 --rc genhtml_function_coverage=1 00:34:31.780 --rc genhtml_legend=1 00:34:31.780 --rc geninfo_all_blocks=1 00:34:31.780 --rc geninfo_unexecuted_blocks=1 00:34:31.780 00:34:31.780 ' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.780 --rc genhtml_branch_coverage=1 00:34:31.780 --rc genhtml_function_coverage=1 00:34:31.780 --rc genhtml_legend=1 00:34:31.780 --rc geninfo_all_blocks=1 00:34:31.780 --rc geninfo_unexecuted_blocks=1 00:34:31.780 00:34:31.780 ' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.780 --rc genhtml_branch_coverage=1 00:34:31.780 --rc genhtml_function_coverage=1 00:34:31.780 --rc genhtml_legend=1 00:34:31.780 --rc geninfo_all_blocks=1 00:34:31.780 --rc geninfo_unexecuted_blocks=1 00:34:31.780 00:34:31.780 ' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:31.780 13:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:31.780 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:31.781 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.926 13:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:39.926 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:39.926 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.926 13:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:39.926 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:39.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.926 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.926 13:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.927 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:39.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:34:39.927 00:34:39.927 --- 10.0.0.2 ping statistics --- 00:34:39.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.927 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:34:39.927 00:34:39.927 --- 10.0.0.1 ping statistics --- 00:34:39.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.927 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=921473 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 921473 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 921473 ']' 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:39.927 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 [2024-11-06 13:59:02.298068] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:39.927 [2024-11-06 13:59:02.299220] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:34:39.927 [2024-11-06 13:59:02.299273] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.927 [2024-11-06 13:59:02.398810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.927 [2024-11-06 13:59:02.450579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.927 [2024-11-06 13:59:02.450633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.927 [2024-11-06 13:59:02.450641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.927 [2024-11-06 13:59:02.450649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.927 [2024-11-06 13:59:02.450655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.927 [2024-11-06 13:59:02.452767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:39.927 [2024-11-06 13:59:02.452921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:39.927 [2024-11-06 13:59:02.453161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:39.927 [2024-11-06 13:59:02.453164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:39.927 [2024-11-06 13:59:02.527587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.927 [2024-11-06 13:59:02.528989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:39.927 [2024-11-06 13:59:02.529307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:39.927 [2024-11-06 13:59:02.529797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:39.927 [2024-11-06 13:59:02.529841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 [2024-11-06 13:59:03.130172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 Malloc0 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.927 [2024-11-06 13:59:03.218406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
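At this point the target is listening and the trace next expands `gen_nvmf_target_json` into the initiator config that `bdevio` reads from `/dev/fd/62`. A standalone sketch of that expansion follows; the values are the ones visible in this run (10.0.0.2:4420, cnode1/host1), whereas in `nvmf/common.sh` they come from the test environment:

```shell
#!/bin/sh
# Standalone sketch of the gen_nvmf_target_json heredoc expansion seen in
# this trace. Values are hard-coded to what this run used; the real helper
# reads them from TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT etc.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# In the trace, this JSON is piped to bdevio via --json /dev/fd/62.
printf '%s\n' "$config"
```

The heredoc-plus-`cat` pattern matches what the trace shows (`config+=("$(cat <<-EOF ... EOF)")` followed by `jq .` and a comma-joined `printf`); the sketch collapses the multi-subsystem loop to the single subsystem this run configured.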
00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.927 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:39.928 { 00:34:39.928 "params": { 00:34:39.928 "name": "Nvme$subsystem", 00:34:39.928 "trtype": "$TEST_TRANSPORT", 00:34:39.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.928 "adrfam": "ipv4", 00:34:39.928 "trsvcid": "$NVMF_PORT", 00:34:39.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.928 "hdgst": ${hdgst:-false}, 00:34:39.928 "ddgst": ${ddgst:-false} 00:34:39.928 }, 00:34:39.928 "method": "bdev_nvme_attach_controller" 00:34:39.928 } 00:34:39.928 EOF 00:34:39.928 )") 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:39.928 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.928 "params": { 00:34:39.928 "name": "Nvme1", 00:34:39.928 "trtype": "tcp", 00:34:39.928 "traddr": "10.0.0.2", 00:34:39.928 "adrfam": "ipv4", 00:34:39.928 "trsvcid": "4420", 00:34:39.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.928 "hdgst": false, 00:34:39.928 "ddgst": false 00:34:39.928 }, 00:34:39.928 "method": "bdev_nvme_attach_controller" 00:34:39.928 }' 00:34:39.928 [2024-11-06 13:59:03.283976] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:34:39.928 [2024-11-06 13:59:03.284023] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921552 ] 00:34:40.189 [2024-11-06 13:59:03.354362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:40.189 [2024-11-06 13:59:03.392768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.189 [2024-11-06 13:59:03.392969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.189 [2024-11-06 13:59:03.392850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.451 I/O targets: 00:34:40.451 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:40.451 00:34:40.451 00:34:40.451 CUnit - A unit testing framework for C - Version 2.1-3 00:34:40.451 http://cunit.sourceforge.net/ 00:34:40.451 00:34:40.451 00:34:40.451 Suite: bdevio tests on: Nvme1n1 00:34:40.451 Test: blockdev write read block ...passed 00:34:40.451 Test: blockdev write zeroes read block ...passed 00:34:40.451 Test: blockdev write zeroes read no split ...passed 00:34:40.451 Test: blockdev 
write zeroes read split ...passed 00:34:40.451 Test: blockdev write zeroes read split partial ...passed 00:34:40.451 Test: blockdev reset ...[2024-11-06 13:59:03.819266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:40.451 [2024-11-06 13:59:03.819335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233b970 (9): Bad file descriptor 00:34:40.712 [2024-11-06 13:59:03.872418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:40.712 passed 00:34:40.712 Test: blockdev write read 8 blocks ...passed 00:34:40.712 Test: blockdev write read size > 128k ...passed 00:34:40.712 Test: blockdev write read invalid size ...passed 00:34:40.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:40.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:40.712 Test: blockdev write read max offset ...passed 00:34:40.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:40.712 Test: blockdev writev readv 8 blocks ...passed 00:34:40.972 Test: blockdev writev readv 30 x 1block ...passed 00:34:40.972 Test: blockdev writev readv block ...passed 00:34:40.972 Test: blockdev writev readv size > 128k ...passed 00:34:40.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:40.972 Test: blockdev comparev and writev ...[2024-11-06 13:59:04.135427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.972 [2024-11-06 13:59:04.135452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.972 [2024-11-06 13:59:04.135463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.972 
[2024-11-06 13:59:04.135469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.972 [2024-11-06 13:59:04.136020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.972 [2024-11-06 13:59:04.136029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:40.972 [2024-11-06 13:59:04.136039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.972 [2024-11-06 13:59:04.136045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.972 [2024-11-06 13:59:04.136586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.972 [2024-11-06 13:59:04.136594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.972 [2024-11-06 13:59:04.136603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.973 [2024-11-06 13:59:04.136609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.973 [2024-11-06 13:59:04.137168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.973 [2024-11-06 13:59:04.137176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.973 [2024-11-06 13:59:04.137185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:40.973 [2024-11-06 13:59:04.137190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.973 passed 00:34:40.973 Test: blockdev nvme passthru rw ...passed 00:34:40.973 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:59:04.220464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:40.973 [2024-11-06 13:59:04.220478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.973 [2024-11-06 13:59:04.220685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:40.973 [2024-11-06 13:59:04.220692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.973 [2024-11-06 13:59:04.221064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:40.973 [2024-11-06 13:59:04.221072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.973 [2024-11-06 13:59:04.221394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:40.973 [2024-11-06 13:59:04.221401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.973 passed 00:34:40.973 Test: blockdev nvme admin passthru ...passed 00:34:40.973 Test: blockdev copy ...passed 00:34:40.973 00:34:40.973 Run Summary: Type Total Ran Passed Failed Inactive 00:34:40.973 suites 1 1 n/a 0 0 00:34:40.973 tests 23 23 23 0 0 00:34:40.973 asserts 152 152 152 0 n/a 00:34:40.973 00:34:40.973 Elapsed time = 1.182 
seconds 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.234 rmmod nvme_tcp 00:34:41.234 rmmod nvme_fabrics 00:34:41.234 rmmod nvme_keyring 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 921473 ']' 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 921473 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 921473 ']' 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 921473 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 921473 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 921473' 00:34:41.234 killing process with pid 921473 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 921473 00:34:41.234 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 921473 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.495 13:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.408 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.408 00:34:43.408 real 0m11.890s 00:34:43.408 user 0m9.747s 00:34:43.408 sys 0m6.320s 00:34:43.408 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:43.408 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.408 ************************************ 00:34:43.408 END TEST nvmf_bdevio 00:34:43.408 ************************************ 00:34:43.670 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:43.670 00:34:43.670 real 4m55.867s 00:34:43.670 user 10m19.945s 00:34:43.670 sys 2m2.434s 00:34:43.670 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:34:43.670 13:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:43.670 ************************************ 00:34:43.670 END TEST nvmf_target_core_interrupt_mode 00:34:43.670 ************************************ 00:34:43.670 13:59:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:43.670 13:59:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:43.670 13:59:06 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:43.670 13:59:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.670 ************************************ 00:34:43.670 START TEST nvmf_interrupt 00:34:43.670 ************************************ 00:34:43.670 13:59:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:43.670 * Looking for test storage... 
00:34:43.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.670 13:59:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:43.670 13:59:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:43.670 13:59:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:43.931 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:43.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.932 --rc genhtml_branch_coverage=1 00:34:43.932 --rc genhtml_function_coverage=1 00:34:43.932 --rc genhtml_legend=1 00:34:43.932 --rc geninfo_all_blocks=1 00:34:43.932 --rc geninfo_unexecuted_blocks=1 00:34:43.932 00:34:43.932 ' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:43.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.932 --rc genhtml_branch_coverage=1 00:34:43.932 --rc 
genhtml_function_coverage=1 00:34:43.932 --rc genhtml_legend=1 00:34:43.932 --rc geninfo_all_blocks=1 00:34:43.932 --rc geninfo_unexecuted_blocks=1 00:34:43.932 00:34:43.932 ' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:43.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.932 --rc genhtml_branch_coverage=1 00:34:43.932 --rc genhtml_function_coverage=1 00:34:43.932 --rc genhtml_legend=1 00:34:43.932 --rc geninfo_all_blocks=1 00:34:43.932 --rc geninfo_unexecuted_blocks=1 00:34:43.932 00:34:43.932 ' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:43.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.932 --rc genhtml_branch_coverage=1 00:34:43.932 --rc genhtml_function_coverage=1 00:34:43.932 --rc genhtml_legend=1 00:34:43.932 --rc geninfo_all_blocks=1 00:34:43.932 --rc geninfo_unexecuted_blocks=1 00:34:43.932 00:34:43.932 ' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.932 
13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.932 
13:59:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.932 13:59:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.932 
13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.932 13:59:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.523 13:59:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:50.523 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.523 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:50.523 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.524 13:59:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:50.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:50.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.524 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.785 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.785 13:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.785 13:59:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:34:50.785 00:34:50.785 --- 10.0.0.2 ping statistics --- 00:34:50.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.785 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:34:50.785 00:34:50.785 --- 10.0.0.1 ping statistics --- 00:34:50.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.785 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.785 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.785 13:59:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:51.046 13:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:51.046 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:51.046 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:51.046 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.046 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=925936 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 925936 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 925936 ']' 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:51.047 13:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.047 [2024-11-06 13:59:14.244419] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:51.047 [2024-11-06 13:59:14.245605] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:34:51.047 [2024-11-06 13:59:14.245656] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.047 [2024-11-06 13:59:14.330218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.047 [2024-11-06 13:59:14.372619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.047 [2024-11-06 13:59:14.372654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.047 [2024-11-06 13:59:14.372662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.047 [2024-11-06 13:59:14.372669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.047 [2024-11-06 13:59:14.372675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:51.047 [2024-11-06 13:59:14.373930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.047 [2024-11-06 13:59:14.373950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.308 [2024-11-06 13:59:14.430635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:51.308 [2024-11-06 13:59:14.431075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:51.308 [2024-11-06 13:59:14.431431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:51.880 5000+0 records in 00:34:51.880 5000+0 records out 00:34:51.880 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184697 s, 554 MB/s 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 AIO0 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.880 13:59:15 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 [2024-11-06 13:59:15.138516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:51.880 [2024-11-06 13:59:15.178808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 925936 0 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 925936 0 idle 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:34:51.880 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925936 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.25 reactor_0' 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925936 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.25 reactor_0 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 925936 1 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 925936 1 idle 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:52.141 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925985 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925985 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 
reactor_1 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=926233 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 925936 0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 925936 0 busy 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925936 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.38 reactor_0' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925936 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.38 reactor_0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 925936 1 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 925936 1 busy 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:34:52.403 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925985 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.25 reactor_1' 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925985 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.25 reactor_1 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:52.664 13:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 926233 00:35:02.660 Initializing NVMe Controllers 00:35:02.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:02.660 Controller IO queue size 256, less than required. 00:35:02.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:02.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:02.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:02.661 Initialization complete. Launching workers. 
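The `reactor_is_busy_or_idle` helper traced above (interrupt/common.sh) snapshots per-thread CPU usage with `top -bH`, grabs column 9 (%CPU) for the `reactor_N` thread, truncates the fraction, and compares it against a threshold. A minimal sketch of that logic, under the assumption of GNU `top` batch-mode field layout; `classify_reactor` is a hypothetical name, and the 30% defaults mirror the thresholds shown in the trace:

```shell
# Hedged sketch of the busy/idle classification from interrupt/common.sh.
# Input is one line of `top -bHn 1` output; field 9 is %CPU (GNU top layout).
classify_reactor() {
  local top_line=$1 busy_threshold=${2:-30} idle_threshold=${3:-30}
  local cpu_rate
  # Mirror the trace: strip leading whitespace, take the %CPU column.
  cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}          # drop the fractional part, as cpu_rate=80.0 -> 80
  if (( cpu_rate >= busy_threshold )); then
    echo busy
  elif (( cpu_rate <= idle_threshold )); then
    echo idle
  fi
}
```

The real helper additionally retries up to 10 times (`j = 10` in the trace) to tolerate a momentary reading on the wrong side of the threshold.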
00:35:02.661 ======================================================== 00:35:02.661 Latency(us) 00:35:02.661 Device Information : IOPS MiB/s Average min max 00:35:02.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16669.00 65.11 15367.91 2410.78 21396.23 00:35:02.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18802.50 73.45 13617.03 7447.16 29225.71 00:35:02.661 ======================================================== 00:35:02.661 Total : 35471.50 138.56 14439.82 2410.78 29225.71 00:35:02.661 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 925936 0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 925936 0 idle 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925936 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0' 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925936 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 925936 1 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 925936 1 idle 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:02.661 13:59:25 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:35:02.661 13:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925985 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925985 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:02.920 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:02.921 13:59:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.921 13:59:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:03.489 13:59:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:03.489 13:59:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:03.489 13:59:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:03.489 13:59:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:03.489 13:59:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 925936 0 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 925936 0 idle 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:35:05.401 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925936 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0' 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925936 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 925936 1 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 925936 1 idle 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=925936 00:35:05.662 
13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 925936 -w 256 00:35:05.662 13:59:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 925985 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.12 reactor_1' 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 925985 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.12 reactor_1 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:05.662 13:59:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:05.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.922 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.182 rmmod nvme_tcp 00:35:06.182 rmmod nvme_fabrics 00:35:06.182 rmmod nvme_keyring 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:06.182 13:59:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 925936 ']' 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 925936 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 925936 ']' 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 925936 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 925936 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 925936' 00:35:06.182 killing process with pid 925936 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 925936 00:35:06.182 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 925936 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:06.442 13:59:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.353 13:59:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.353 00:35:08.353 real 0m24.752s 00:35:08.353 user 0m40.365s 00:35:08.353 sys 0m8.969s 00:35:08.353 13:59:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.353 13:59:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:08.353 ************************************ 00:35:08.353 END TEST nvmf_interrupt 00:35:08.353 ************************************ 00:35:08.353 00:35:08.353 real 30m1.006s 00:35:08.353 user 61m55.176s 00:35:08.353 sys 10m4.920s 00:35:08.353 13:59:31 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.353 13:59:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.353 ************************************ 00:35:08.353 END TEST nvmf_tcp 00:35:08.353 ************************************ 00:35:08.353 13:59:31 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:08.353 13:59:31 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:08.353 13:59:31 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:08.353 13:59:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:08.353 13:59:31 -- common/autotest_common.sh@10 -- # set +x 00:35:08.614 ************************************ 
00:35:08.614 START TEST spdkcli_nvmf_tcp 00:35:08.614 ************************************ 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:08.614 * Looking for test storage... 00:35:08.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.614 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:08.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.615 --rc genhtml_branch_coverage=1 00:35:08.615 --rc genhtml_function_coverage=1 00:35:08.615 --rc genhtml_legend=1 00:35:08.615 --rc geninfo_all_blocks=1 00:35:08.615 --rc geninfo_unexecuted_blocks=1 00:35:08.615 00:35:08.615 ' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:08.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.615 --rc genhtml_branch_coverage=1 00:35:08.615 --rc genhtml_function_coverage=1 00:35:08.615 --rc genhtml_legend=1 00:35:08.615 --rc geninfo_all_blocks=1 
00:35:08.615 --rc geninfo_unexecuted_blocks=1 00:35:08.615 00:35:08.615 ' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:08.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.615 --rc genhtml_branch_coverage=1 00:35:08.615 --rc genhtml_function_coverage=1 00:35:08.615 --rc genhtml_legend=1 00:35:08.615 --rc geninfo_all_blocks=1 00:35:08.615 --rc geninfo_unexecuted_blocks=1 00:35:08.615 00:35:08.615 ' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:08.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.615 --rc genhtml_branch_coverage=1 00:35:08.615 --rc genhtml_function_coverage=1 00:35:08.615 --rc genhtml_legend=1 00:35:08.615 --rc geninfo_all_blocks=1 00:35:08.615 --rc geninfo_unexecuted_blocks=1 00:35:08.615 00:35:08.615 ' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:08.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.615 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=929472 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 929472 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 929472 ']' 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:08.875 13:59:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.875 [2024-11-06 13:59:32.045811] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:35:08.875 [2024-11-06 13:59:32.045883] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929472 ] 00:35:08.875 [2024-11-06 13:59:32.121078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:08.875 [2024-11-06 13:59:32.164446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.875 [2024-11-06 13:59:32.164449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:09.815 
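The target above is launched with `-m 0x3`, and the log then reports reactors starting on cores 0 and 1. As an illustrative sketch only (SPDK/DPDK do this parsing internally in C; this is not SPDK code), the core-mask expansion works like this:

```python
# Decode an SPDK/DPDK-style hex core mask (e.g. -m 0x3) into core indices.
# Illustrative reimplementation, not taken from the SPDK sources.
def cores_from_mask(mask: int) -> list[int]:
    cores = []
    bit = 0
    while mask:
        if mask & 1:          # bit N set -> core N is in the mask
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(cores_from_mask(0x3))   # [0, 1], matching the two reactor log lines
```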
13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.815 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:09.815 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:09.815 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:09.815 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:09.815 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:09.815 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:09.815 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:09.815 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:09.816 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:09.816 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:09.816 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:09.816 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:09.816 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:09.816 ' 00:35:12.359 [2024-11-06 13:59:35.587788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.743 [2024-11-06 13:59:36.956191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:16.285 [2024-11-06 13:59:39.483609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:35:18.828 [2024-11-06 13:59:41.690263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:20.210 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:20.210 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:20.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:20.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:20.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:20.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:20.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:20.211 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:20.211 
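Each `Executing command: [...]` line above records a triple: the spdkcli command, the string to match in the output, and whether a match is expected. A small hypothetical helper (the function name and approach are mine, not part of the SPDK test suite) for pulling those triples out of such log lines:

```python
import ast

def parse_executing(line: str):
    """Extract the [command, match_str, expect_match] list from an
    'Executing command: [...]' log line; return None if absent.
    Hypothetical helper, not part of spdkcli_job.py."""
    marker = "Executing command: "
    idx = line.find(marker)
    if idx < 0:
        return None
    # The remainder of the line is a Python list literal.
    return ast.literal_eval(line[idx + len(marker):])

sample = ("Executing command: ['/bdevs/malloc create 32 512 Malloc1', "
          "'Malloc1', True]")
print(parse_executing(sample))  # ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
```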
13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:20.211 13:59:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:20.781 13:59:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:20.781 13:59:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:20.781 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:20.781 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:20.781 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.782 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:20.782 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:20.782 13:59:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.782 13:59:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:20.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:20.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:20.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:20.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:20.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:20.782 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:20.782 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:20.782 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:20.782 ' 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:26.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:26.068 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:26.068 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:26.068 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 929472 ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929472' 00:35:26.068 killing process with pid 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
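The `killprocess` sequence above probes the target with `kill -0 $pid` before and after terminating it; once the process has exited, the probe fails with "No such process", as the trace shows. A minimal Python equivalent of that liveness probe:

```python
import os

def process_alive(pid: int) -> bool:
    """Equivalent of the shell's `kill -0 $pid` liveness probe:
    signal 0 delivers nothing but checks the pid exists."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False          # same condition as "No such process"
    except PermissionError:
        return True           # exists, but owned by another user

print(process_alive(os.getpid()))  # True for the current process
```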
cleanup 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 929472 ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 929472 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 929472 ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 929472 00:35:26.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (929472) - No such process 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 929472 is not found' 00:35:26.068 Process with pid 929472 is not found 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:26.068 00:35:26.068 real 0m17.489s 00:35:26.068 user 0m38.051s 00:35:26.068 sys 0m0.764s 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:26.068 13:59:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.068 ************************************ 00:35:26.068 END TEST spdkcli_nvmf_tcp 00:35:26.068 ************************************ 00:35:26.068 13:59:49 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:26.068 13:59:49 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:26.068 13:59:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:26.068 13:59:49 -- common/autotest_common.sh@10 
-- # set +x 00:35:26.068 ************************************ 00:35:26.068 START TEST nvmf_identify_passthru 00:35:26.068 ************************************ 00:35:26.068 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:26.068 * Looking for test storage... 00:35:26.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.068 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:26.068 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:26.068 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:26.331 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:26.331 13:59:49 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:26.331 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.331 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:26.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.331 --rc genhtml_branch_coverage=1 00:35:26.331 --rc genhtml_function_coverage=1 00:35:26.331 --rc genhtml_legend=1 00:35:26.331 --rc geninfo_all_blocks=1 00:35:26.331 --rc geninfo_unexecuted_blocks=1 00:35:26.331 00:35:26.331 ' 00:35:26.331 
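The `lt 1.15 2` trace above shows `scripts/common.sh` splitting each version on `.`/`-`/`:` and comparing component by component, padding the shorter version with zeros. A hedged Python sketch of that comparison (an illustrative reimplementation, not the shell code itself):

```python
# Sketch of the dotted-version comparison traced above (cmp_versions):
# split on separators, compare numerically left to right, pad with 0s.
def version_lt(a: str, b: str) -> bool:
    pa = [int(x) for x in a.replace('-', '.').split('.') if x.isdigit()]
    pb = [int(x) for x in b.replace('-', '.').split('.') if x.isdigit()]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb            # list comparison is already element-wise

print(version_lt("1.15", "2"))  # True, as in the lcov version check
```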
13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:26.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.331 --rc genhtml_branch_coverage=1 00:35:26.331 --rc genhtml_function_coverage=1 00:35:26.331 --rc genhtml_legend=1 00:35:26.331 --rc geninfo_all_blocks=1 00:35:26.331 --rc geninfo_unexecuted_blocks=1 00:35:26.331 00:35:26.331 ' 00:35:26.331 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:26.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.331 --rc genhtml_branch_coverage=1 00:35:26.331 --rc genhtml_function_coverage=1 00:35:26.331 --rc genhtml_legend=1 00:35:26.331 --rc geninfo_all_blocks=1 00:35:26.331 --rc geninfo_unexecuted_blocks=1 00:35:26.331 00:35:26.331 ' 00:35:26.331 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:26.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.331 --rc genhtml_branch_coverage=1 00:35:26.331 --rc genhtml_function_coverage=1 00:35:26.331 --rc genhtml_legend=1 00:35:26.331 --rc geninfo_all_blocks=1 00:35:26.331 --rc geninfo_unexecuted_blocks=1 00:35:26.331 00:35:26.331 ' 00:35:26.331 13:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.331 13:59:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.331 13:59:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.331 13:59:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.331 13:59:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:26.331 13:59:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:26.331 13:59:49 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:26.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.331 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.331 13:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.331 13:59:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.332 13:59:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.332 13:59:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.332 13:59:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.332 13:59:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.332 13:59:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.332 13:59:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:26.332 13:59:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.332 13:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.332 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.332 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.332 13:59:49 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.332 13:59:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.474 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.475 
13:59:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:34.475 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:34.475 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:34.475 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.475 13:59:56 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:34.475 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.475 
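The discovery loop above maps each matched PCI address to its kernel netdev name by globbing `/sys/bus/pci/devices/$pci/net/` (yielding `cvl_0_0` and `cvl_0_1` here). A minimal standalone sketch of that lookup — `pci_net_devs` is a hypothetical helper name, and the sysfs root is a parameter so the logic can be exercised against a mock tree:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup the test performs for each supported PCI NIC.
# sysroot defaults to /sys but may point at a mock tree for testing.
pci_net_devs() {
    local pci=$1 sysroot=${2:-/sys}
    local dev
    # each entry under .../net/ is a kernel interface name (e.g. cvl_0_0)
    for dev in "$sysroot/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "${dev##*/}"
    done
}
```

The `##*/` expansion mirrors the log's `pci_net_devs=("${pci_net_devs[@]##*/}")` step, which strips the sysfs path prefix and keeps only the interface name.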
13:59:56 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.475 13:59:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:35:34.475 00:35:34.475 --- 10.0.0.2 ping statistics --- 00:35:34.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.475 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:35:34.475 00:35:34.475 --- 10.0.0.1 ping statistics --- 00:35:34.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.475 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.475 13:59:57 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.475 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.475 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:34.475 
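The `nvmf_tcp_init` sequence above moves the target NIC into its own network namespace so that the target and initiator ends of the NVMe/TCP connection live on one host, then verifies reachability with pings in both directions. A dry-run sketch of that topology setup — `nvmf_tcp_topology` is a hypothetical wrapper, and `RUN` defaults to `echo` because the real commands need root:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator namespace split performed by nvmf_tcp_init.
# RUN defaults to 'echo' (dry run); set RUN= to actually execute as root.
RUN=${RUN:-echo}

nvmf_tcp_topology() {
    local tgt_if=$1 ini_if=$2 ns=$3
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator side
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # open the default NVMe/TCP port toward the initiator interface
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

Because the target runs inside the namespace, the log later prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).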
13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:34.475 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:34.476 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:34.476 13:59:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:34.476 13:59:57 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:34.476 13:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=936814 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:35.045 13:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 936814 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 936814 ']' 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
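The serial and model numbers above are scraped from `spdk_nvme_identify` output with a `grep | awk` pipeline. A sketch of that extraction (`extract_field` is a hypothetical helper; the sample model string in the usage is illustrative, not from this device) — note that `awk '{print $3}'` keeps only the first token after the two-word label, which is why the multi-word model string is recorded as just `SAMSUNG` in the log:

```shell
#!/usr/bin/env bash
# Sketch of the field scraping applied to spdk_nvme_identify output.
# Reads identify text on stdin, prints the third whitespace-separated token
# of the first line matching the label.
extract_field() {
    local label=$1
    grep "$label" | awk '{print $3}'
}
```

For example, `printf 'Serial Number: S64GNE0R605487\n' | extract_field 'Serial Number:'` yields `S64GNE0R605487`, matching the value captured above.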
00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:35.045 13:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.045 [2024-11-06 13:59:58.260287] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:35:35.046 [2024-11-06 13:59:58.260345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.046 [2024-11-06 13:59:58.338527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:35.046 [2024-11-06 13:59:58.377766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.046 [2024-11-06 13:59:58.377802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.046 [2024-11-06 13:59:58.377810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.046 [2024-11-06 13:59:58.377817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.046 [2024-11-06 13:59:58.377823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:35.046 [2024-11-06 13:59:58.379557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.046 [2024-11-06 13:59:58.379674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.046 [2024-11-06 13:59:58.379813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.046 [2024-11-06 13:59:58.379814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:36.058 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 INFO: Log level set to 20 00:35:36.058 INFO: Requests: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "method": "nvmf_set_config", 00:35:36.058 "id": 1, 00:35:36.058 "params": { 00:35:36.058 "admin_cmd_passthru": { 00:35:36.058 "identify_ctrlr": true 00:35:36.058 } 00:35:36.058 } 00:35:36.058 } 00:35:36.058 00:35:36.058 INFO: response: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "id": 1, 00:35:36.058 "result": true 00:35:36.058 } 00:35:36.058 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.058 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 INFO: Setting log level to 20 00:35:36.058 INFO: Setting log level to 20 00:35:36.058 INFO: Log level set to 20 00:35:36.058 INFO: Log level set to 20 00:35:36.058 
INFO: Requests: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "method": "framework_start_init", 00:35:36.058 "id": 1 00:35:36.058 } 00:35:36.058 00:35:36.058 INFO: Requests: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "method": "framework_start_init", 00:35:36.058 "id": 1 00:35:36.058 } 00:35:36.058 00:35:36.058 [2024-11-06 13:59:59.140270] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:36.058 INFO: response: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "id": 1, 00:35:36.058 "result": true 00:35:36.058 } 00:35:36.058 00:35:36.058 INFO: response: 00:35:36.058 { 00:35:36.058 "jsonrpc": "2.0", 00:35:36.058 "id": 1, 00:35:36.058 "result": true 00:35:36.058 } 00:35:36.058 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.058 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 INFO: Setting log level to 40 00:35:36.058 INFO: Setting log level to 40 00:35:36.058 INFO: Setting log level to 40 00:35:36.058 [2024-11-06 13:59:59.153594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.058 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.058 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:36.058 13:59:59 
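The `rpc_cmd` calls above exchange JSON-RPC 2.0 messages with the application over `/var/tmp/spdk.sock`, as shown by the `INFO: Requests:` / `INFO: response:` pairs. A minimal sketch of just the request envelope (`jsonrpc_request` is a hypothetical helper; output is compact rather than pretty-printed like the log):

```shell
#!/usr/bin/env bash
# Sketch of the JSON-RPC 2.0 request envelope seen in the log
# (method plus optional params object, with a request id).
jsonrpc_request() {
    local id=$1 method=$2 params=$3
    if [ -n "$params" ]; then
        printf '{"jsonrpc":"2.0","method":"%s","id":%d,"params":%s}\n' "$method" "$id" "$params"
    else
        printf '{"jsonrpc":"2.0","method":"%s","id":%d}\n' "$method" "$id"
    fi
}
```

The `nvmf_set_config` request above, for instance, carries `{"admin_cmd_passthru":{"identify_ctrlr":true}}` as its params, while `framework_start_init` takes no params at all.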
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.058 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 Nvme0n1 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 [2024-11-06 13:59:59.549034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.330 13:59:59 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 [ 00:35:36.330 { 00:35:36.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:36.330 "subtype": "Discovery", 00:35:36.330 "listen_addresses": [], 00:35:36.330 "allow_any_host": true, 00:35:36.330 "hosts": [] 00:35:36.330 }, 00:35:36.330 { 00:35:36.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.330 "subtype": "NVMe", 00:35:36.330 "listen_addresses": [ 00:35:36.330 { 00:35:36.330 "trtype": "TCP", 00:35:36.330 "adrfam": "IPv4", 00:35:36.330 "traddr": "10.0.0.2", 00:35:36.330 "trsvcid": "4420" 00:35:36.330 } 00:35:36.330 ], 00:35:36.330 "allow_any_host": true, 00:35:36.330 "hosts": [], 00:35:36.330 "serial_number": "SPDK00000000000001", 00:35:36.330 "model_number": "SPDK bdev Controller", 00:35:36.330 "max_namespaces": 1, 00:35:36.330 "min_cntlid": 1, 00:35:36.330 "max_cntlid": 65519, 00:35:36.330 "namespaces": [ 00:35:36.330 { 00:35:36.330 "nsid": 1, 00:35:36.330 "bdev_name": "Nvme0n1", 00:35:36.330 "name": "Nvme0n1", 00:35:36.330 "nguid": "36344730526054870025384500000044", 00:35:36.330 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:36.330 } 00:35:36.330 ] 00:35:36.330 } 00:35:36.330 ] 00:35:36.330 13:59:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:36.330 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:36.597 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:36.597 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:36.597 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:36.597 13:59:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:36.857 14:00:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.857 rmmod nvme_tcp 00:35:36.857 rmmod nvme_fabrics 00:35:36.857 rmmod nvme_keyring 00:35:36.857 14:00:00 
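The `nvmf_get_subsystems` dump above is JSON; when a single value such as the listener address is needed, it can be pulled out with plain text tools in the same spirit as the test's `grep`/`awk` pipelines (the test itself re-runs `spdk_nvme_identify` over TCP and compares serial and model against the passthrough values). A sketch of that extraction — `listener_addr` is a hypothetical helper:

```shell
#!/usr/bin/env bash
# Sketch: pull the first "traddr" value out of nvmf_get_subsystems JSON
# using grep/sed text matching (no jq dependency assumed).
listener_addr() {
    grep -o '"traddr": "[^"]*"' | head -1 | sed 's/.*"traddr": "\(.*\)"/\1/'
}
```

Run against the subsystem listing above, this would yield `10.0.0.2`, the namespace-side address the TCP listener was added on.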
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 936814 ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 936814 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 936814 ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 936814 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:36.857 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 936814 00:35:37.119 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:37.119 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:37.119 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 936814' 00:35:37.119 killing process with pid 936814 00:35:37.119 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 936814 00:35:37.119 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 936814 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.380 14:00:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.380 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.380 14:00:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.296 14:00:02 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:39.296 00:35:39.296 real 0m13.290s 00:35:39.296 user 0m10.744s 00:35:39.296 sys 0m6.774s 00:35:39.296 14:00:02 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:39.296 14:00:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.296 ************************************ 00:35:39.296 END TEST nvmf_identify_passthru 00:35:39.296 ************************************ 00:35:39.296 14:00:02 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:39.296 14:00:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:39.296 14:00:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:39.296 14:00:02 -- common/autotest_common.sh@10 -- # set +x 00:35:39.558 ************************************ 00:35:39.558 START TEST nvmf_dif 00:35:39.558 ************************************ 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:39.558 * Looking for test storage... 
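The `iptr` teardown above works because every firewall rule the test inserted was tagged with an `SPDK_NVMF` comment (via the `ipts` wrapper), so cleanup is just round-tripping the ruleset through a filter. A sketch of the pure text-filtering part — the full pipeline `iptables-save | filter_spdk_rules | iptables-restore` needs root, so only the filter (a hypothetical helper name) is shown:

```shell
#!/usr/bin/env bash
# Sketch of the tagged-rule cleanup: drop every saved iptables rule that
# carries the SPDK_NVMF comment marker, keep everything else.
filter_spdk_rules() {
    grep -v SPDK_NVMF
}
```

Tagging rules at insertion time and filtering on the tag at teardown avoids having to remember each rule's exact arguments for a matching `iptables -D`.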
00:35:39.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:39.558 14:00:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:39.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.558 --rc genhtml_branch_coverage=1 00:35:39.558 --rc genhtml_function_coverage=1 00:35:39.558 --rc genhtml_legend=1 00:35:39.558 --rc geninfo_all_blocks=1 00:35:39.558 --rc geninfo_unexecuted_blocks=1 00:35:39.558 00:35:39.558 ' 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:39.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.558 --rc genhtml_branch_coverage=1 00:35:39.558 --rc genhtml_function_coverage=1 00:35:39.558 --rc genhtml_legend=1 00:35:39.558 --rc geninfo_all_blocks=1 00:35:39.558 --rc geninfo_unexecuted_blocks=1 00:35:39.558 00:35:39.558 ' 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:35:39.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.558 --rc genhtml_branch_coverage=1 00:35:39.558 --rc genhtml_function_coverage=1 00:35:39.558 --rc genhtml_legend=1 00:35:39.558 --rc geninfo_all_blocks=1 00:35:39.558 --rc geninfo_unexecuted_blocks=1 00:35:39.558 00:35:39.558 ' 00:35:39.558 14:00:02 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:39.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.558 --rc genhtml_branch_coverage=1 00:35:39.558 --rc genhtml_function_coverage=1 00:35:39.558 --rc genhtml_legend=1 00:35:39.558 --rc geninfo_all_blocks=1 00:35:39.558 --rc geninfo_unexecuted_blocks=1 00:35:39.558 00:35:39.558 ' 00:35:39.558 14:00:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.558 14:00:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:39.558 14:00:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.558 14:00:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.558 14:00:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.558 14:00:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:39.559 14:00:02 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.559 14:00:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:39.559 14:00:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.559 14:00:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.559 14:00:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.559 14:00:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.559 14:00:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.559 14:00:02 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.559 14:00:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:39.559 14:00:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:39.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:39.559 14:00:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:39.559 14:00:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:39.559 14:00:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:39.559 14:00:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:39.559 14:00:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.559 14:00:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.559 14:00:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:39.559 14:00:02 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:39.559 14:00:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:46.153 14:00:09 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:46.153 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:46.153 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.153 14:00:09 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:46.153 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:46.153 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.153 
14:00:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.153 14:00:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:46.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:35:46.415 00:35:46.415 --- 10.0.0.2 ping statistics --- 00:35:46.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.415 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:46.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:35:46.415 00:35:46.415 --- 10.0.0.1 ping statistics --- 00:35:46.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.415 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:46.415 14:00:09 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:49.722 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:49.722 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:49.722 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:49.722 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:49.722 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:49.723 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:35:49.723 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:49.723 14:00:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:49.723 14:00:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=943239 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 943239 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 943239 ']' 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:49.723 14:00:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.723 14:00:12 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:49.723 [2024-11-06 14:00:12.960793] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:35:49.723 [2024-11-06 14:00:12.960847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.723 [2024-11-06 14:00:13.039575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.723 [2024-11-06 14:00:13.077352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:49.723 [2024-11-06 14:00:13.077386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.723 [2024-11-06 14:00:13.077395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.723 [2024-11-06 14:00:13.077401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:49.723 [2024-11-06 14:00:13.077407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:49.723 [2024-11-06 14:00:13.078015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.665 14:00:13 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:50.665 14:00:13 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:35:50.666 14:00:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 14:00:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.666 14:00:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:50.666 14:00:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 [2024-11-06 14:00:13.786590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.666 14:00:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 ************************************ 00:35:50.666 START TEST fio_dif_1_default 00:35:50.666 ************************************ 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 bdev_null0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.666 [2024-11-06 14:00:13.862930] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:50.666 14:00:13 
nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:50.666 { 00:35:50.666 "params": { 00:35:50.666 "name": "Nvme$subsystem", 00:35:50.666 "trtype": "$TEST_TRANSPORT", 00:35:50.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.666 "adrfam": "ipv4", 00:35:50.666 "trsvcid": "$NVMF_PORT", 00:35:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.666 "hdgst": ${hdgst:-false}, 00:35:50.666 "ddgst": ${ddgst:-false} 00:35:50.666 }, 00:35:50.666 "method": "bdev_nvme_attach_controller" 00:35:50.666 } 00:35:50.666 EOF 00:35:50.666 )") 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:50.666 "params": { 00:35:50.666 "name": "Nvme0", 00:35:50.666 "trtype": "tcp", 00:35:50.666 "traddr": "10.0.0.2", 00:35:50.666 "adrfam": "ipv4", 00:35:50.666 "trsvcid": "4420", 00:35:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.666 "hdgst": false, 00:35:50.666 "ddgst": false 00:35:50.666 }, 00:35:50.666 "method": "bdev_nvme_attach_controller" 00:35:50.666 }' 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.666 14:00:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.927 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:50.927 fio-3.35 
00:35:50.927 Starting 1 thread 00:36:03.165 00:36:03.165 filename0: (groupid=0, jobs=1): err= 0: pid=943771: Wed Nov 6 14:00:24 2024 00:36:03.165 read: IOPS=190, BW=762KiB/s (780kB/s)(7632KiB/10016msec) 00:36:03.165 slat (nsec): min=5446, max=45225, avg=6412.13, stdev=2020.98 00:36:03.165 clat (usec): min=601, max=42414, avg=20979.03, stdev=20091.50 00:36:03.165 lat (usec): min=606, max=42420, avg=20985.44, stdev=20091.43 00:36:03.165 clat percentiles (usec): 00:36:03.165 | 1.00th=[ 799], 5.00th=[ 898], 10.00th=[ 906], 20.00th=[ 922], 00:36:03.165 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 3687], 60.00th=[41157], 00:36:03.165 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:03.165 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:03.165 | 99.99th=[42206] 00:36:03.165 bw ( KiB/s): min= 704, max= 768, per=99.87%, avg=761.60, stdev=19.70, samples=20 00:36:03.165 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:36:03.165 lat (usec) : 750=0.84%, 1000=48.85% 00:36:03.165 lat (msec) : 2=0.21%, 4=0.21%, 50=49.90% 00:36:03.165 cpu : usr=92.80%, sys=6.98%, ctx=13, majf=0, minf=254 00:36:03.165 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.165 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.165 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:03.165 00:36:03.165 Run status group 0 (all jobs): 00:36:03.165 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7632KiB (7815kB), run=10016-10016msec 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 00:36:03.165 real 0m11.101s 00:36:03.165 user 0m24.631s 00:36:03.165 sys 0m1.069s 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 ************************************ 00:36:03.165 END TEST fio_dif_1_default 00:36:03.165 ************************************ 00:36:03.165 14:00:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:03.165 14:00:24 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:03.165 14:00:24 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 ************************************ 00:36:03.165 START TEST fio_dif_1_multi_subsystems 00:36:03.165 ************************************ 00:36:03.165 14:00:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 bdev_null0 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 [2024-11-06 14:00:25.019126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 bdev_null1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local 
fio_dir=/usr/src/fio 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.165 { 00:36:03.165 "params": { 00:36:03.165 "name": "Nvme$subsystem", 00:36:03.165 "trtype": "$TEST_TRANSPORT", 00:36:03.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.165 "adrfam": "ipv4", 00:36:03.165 "trsvcid": "$NVMF_PORT", 00:36:03.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.165 "hdgst": ${hdgst:-false}, 00:36:03.165 "ddgst": ${ddgst:-false} 00:36:03.165 }, 00:36:03.165 "method": "bdev_nvme_attach_controller" 00:36:03.165 } 00:36:03.165 EOF 00:36:03.165 )") 
00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.165 { 00:36:03.165 "params": { 00:36:03.165 "name": "Nvme$subsystem", 00:36:03.165 "trtype": "$TEST_TRANSPORT", 00:36:03.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.165 "adrfam": "ipv4", 00:36:03.165 "trsvcid": "$NVMF_PORT", 00:36:03.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.165 "hdgst": ${hdgst:-false}, 00:36:03.165 "ddgst": ${ddgst:-false} 00:36:03.165 }, 00:36:03.165 "method": "bdev_nvme_attach_controller" 00:36:03.165 } 00:36:03.165 EOF 00:36:03.165 )") 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:03.165 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.165 "params": { 00:36:03.165 "name": "Nvme0", 00:36:03.165 "trtype": "tcp", 00:36:03.165 "traddr": "10.0.0.2", 00:36:03.165 "adrfam": "ipv4", 00:36:03.165 "trsvcid": "4420", 00:36:03.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.165 "hdgst": false, 00:36:03.165 "ddgst": false 00:36:03.165 }, 00:36:03.165 "method": "bdev_nvme_attach_controller" 00:36:03.165 },{ 00:36:03.165 "params": { 00:36:03.165 "name": "Nvme1", 00:36:03.165 "trtype": "tcp", 00:36:03.165 "traddr": "10.0.0.2", 00:36:03.165 "adrfam": "ipv4", 00:36:03.165 "trsvcid": "4420", 00:36:03.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.165 "hdgst": false, 00:36:03.165 "ddgst": false 00:36:03.165 }, 00:36:03.165 "method": "bdev_nvme_attach_controller" 00:36:03.165 }' 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.166 14:00:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.166 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:03.166 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:03.166 fio-3.35 00:36:03.166 Starting 2 threads 00:36:13.161 00:36:13.161 filename0: (groupid=0, jobs=1): err= 0: pid=945970: Wed Nov 6 14:00:36 2024 00:36:13.161 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10010msec) 00:36:13.161 slat (nsec): min=5520, max=40747, avg=7044.80, stdev=2362.90 00:36:13.161 clat (usec): min=914, max=43139, avg=41339.77, stdev=2673.80 00:36:13.161 lat (usec): min=920, max=43174, avg=41346.81, stdev=2673.90 00:36:13.161 clat percentiles (usec): 00:36:13.161 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:13.161 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:36:13.161 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:36:13.161 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:13.161 | 99.99th=[43254] 00:36:13.161 bw ( KiB/s): min= 384, max= 416, per=33.59%, avg=385.60, stdev= 7.16, samples=20 00:36:13.161 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:36:13.161 lat (usec) : 1000=0.41% 00:36:13.161 lat (msec) : 50=99.59% 00:36:13.162 cpu : usr=96.65%, sys=3.10%, ctx=17, majf=0, minf=214 00:36:13.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.162 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.162 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.162 filename1: (groupid=0, jobs=1): err= 0: pid=945971: Wed Nov 6 14:00:36 2024 00:36:13.162 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10005msec) 00:36:13.162 slat (nsec): min=5503, max=29081, avg=6677.30, stdev=1768.56 00:36:13.162 clat (usec): min=615, max=43026, avg=21042.91, stdev=20163.35 00:36:13.162 lat (usec): min=621, max=43034, avg=21049.59, stdev=20163.14 00:36:13.162 clat percentiles (usec): 00:36:13.162 | 1.00th=[ 709], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 922], 00:36:13.162 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 2442], 60.00th=[41157], 00:36:13.162 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:13.162 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:36:13.162 | 99.99th=[43254] 00:36:13.162 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=758.40, stdev=23.45, samples=20 00:36:13.162 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:36:13.162 lat (usec) : 750=1.47%, 1000=47.63% 00:36:13.162 lat (msec) : 2=0.79%, 4=0.21%, 50=49.89% 00:36:13.162 cpu : usr=96.83%, sys=2.91%, ctx=26, majf=0, minf=46 00:36:13.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.162 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.162 00:36:13.162 Run status group 0 (all jobs): 00:36:13.162 READ: bw=1146KiB/s (1174kB/s), 387KiB/s-760KiB/s (396kB/s-778kB/s), io=11.2MiB (11.7MB), run=10005-10010msec 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 14:00:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.162 00:36:13.162 real 0m11.540s 00:36:13.162 user 0m34.329s 00:36:13.162 sys 0m0.960s 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:13.162 14:00:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.162 ************************************ 00:36:13.162 END TEST fio_dif_1_multi_subsystems 00:36:13.162 ************************************ 00:36:13.426 14:00:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:13.426 14:00:36 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:13.426 14:00:36 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:13.426 14:00:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.426 ************************************ 00:36:13.426 START TEST fio_dif_rand_params 00:36:13.426 ************************************ 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:13.426 14:00:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.426 bdev_null0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.426 [2024-11-06 14:00:36.640666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 
00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.426 { 00:36:13.426 "params": { 00:36:13.426 "name": "Nvme$subsystem", 00:36:13.426 "trtype": "$TEST_TRANSPORT", 00:36:13.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.426 "adrfam": "ipv4", 00:36:13.426 "trsvcid": "$NVMF_PORT", 00:36:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.426 "hdgst": ${hdgst:-false}, 00:36:13.426 "ddgst": ${ddgst:-false} 00:36:13.426 }, 00:36:13.426 "method": "bdev_nvme_attach_controller" 00:36:13.426 } 00:36:13.426 EOF 00:36:13.426 )") 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.426 
14:00:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:13.426 14:00:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:13.426 "params": { 00:36:13.426 "name": "Nvme0", 00:36:13.426 "trtype": "tcp", 00:36:13.426 "traddr": "10.0.0.2", 00:36:13.426 "adrfam": "ipv4", 00:36:13.427 "trsvcid": "4420", 00:36:13.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.427 "hdgst": false, 00:36:13.427 "ddgst": false 00:36:13.427 }, 00:36:13.427 "method": "bdev_nvme_attach_controller" 00:36:13.427 }' 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.427 14:00:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.995 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:13.995 ... 00:36:13.995 fio-3.35 00:36:13.995 Starting 3 threads 00:36:20.576 00:36:20.576 filename0: (groupid=0, jobs=1): err= 0: pid=948359: Wed Nov 6 14:00:42 2024 00:36:20.576 read: IOPS=141, BW=17.7MiB/s (18.6MB/s)(89.4MiB/5040msec) 00:36:20.576 slat (nsec): min=5473, max=30536, avg=6226.98, stdev=1242.99 00:36:20.576 clat (usec): min=5417, max=93956, avg=21136.60, stdev=21021.10 00:36:20.576 lat (usec): min=5423, max=93962, avg=21142.82, stdev=21021.19 00:36:20.576 clat percentiles (usec): 00:36:20.576 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[ 8291], 20.00th=[ 9634], 00:36:20.576 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:36:20.576 | 70.00th=[12911], 80.00th=[49546], 90.00th=[51643], 95.00th=[53216], 00:36:20.576 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:36:20.576 | 99.99th=[93848] 00:36:20.576 bw ( KiB/s): min=14848, max=25600, per=22.07%, avg=18227.20, stdev=3194.70, samples=10 00:36:20.576 iops : min= 116, max= 200, avg=142.40, stdev=24.96, samples=10 00:36:20.576 lat (msec) : 10=25.31%, 20=52.45%, 50=4.06%, 100=18.18% 00:36:20.576 cpu : usr=95.97%, sys=3.79%, ctx=20, majf=0, minf=79 00:36:20.576 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.576 issued rwts: total=715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.576 filename0: (groupid=0, jobs=1): err= 0: pid=948360: Wed Nov 6 14:00:42 2024 00:36:20.576 read: IOPS=236, BW=29.6MiB/s (31.1MB/s)(148MiB/5005msec) 00:36:20.576 slat (nsec): min=7979, max=31854, avg=8706.64, stdev=1040.94 00:36:20.576 
clat (usec): min=5510, max=89436, avg=12647.00, stdev=9551.26 00:36:20.576 lat (usec): min=5519, max=89445, avg=12655.71, stdev=9551.36 00:36:20.576 clat percentiles (usec): 00:36:20.576 | 1.00th=[ 6325], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8717], 00:36:20.576 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10945], 00:36:20.576 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14091], 95.00th=[47449], 00:36:20.576 | 99.00th=[51643], 99.50th=[52691], 99.90th=[55313], 99.95th=[89654], 00:36:20.576 | 99.99th=[89654] 00:36:20.576 bw ( KiB/s): min=20224, max=39936, per=36.67%, avg=30284.80, stdev=7124.23, samples=10 00:36:20.576 iops : min= 158, max= 312, avg=236.60, stdev=55.66, samples=10 00:36:20.576 lat (msec) : 10=46.54%, 20=47.72%, 50=3.46%, 100=2.28% 00:36:20.576 cpu : usr=95.20%, sys=4.54%, ctx=10, majf=0, minf=69 00:36:20.576 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.576 issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.576 filename0: (groupid=0, jobs=1): err= 0: pid=948361: Wed Nov 6 14:00:42 2024 00:36:20.576 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(169MiB/5045msec) 00:36:20.576 slat (nsec): min=5622, max=31781, avg=7914.82, stdev=1893.26 00:36:20.576 clat (usec): min=4854, max=88944, avg=11135.47, stdev=7206.98 00:36:20.576 lat (usec): min=4863, max=88953, avg=11143.38, stdev=7207.11 00:36:20.576 clat percentiles (usec): 00:36:20.576 | 1.00th=[ 5604], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7898], 00:36:20.576 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:36:20.576 | 70.00th=[11207], 80.00th=[12387], 90.00th=[13698], 95.00th=[15139], 00:36:20.576 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[88605], 
00:36:20.576 | 99.99th=[88605] 00:36:20.576 bw ( KiB/s): min=26112, max=38400, per=41.91%, avg=34611.20, stdev=4580.74, samples=10 00:36:20.577 iops : min= 204, max= 300, avg=270.40, stdev=35.79, samples=10 00:36:20.577 lat (msec) : 10=53.40%, 20=43.65%, 50=1.85%, 100=1.11% 00:36:20.577 cpu : usr=95.34%, sys=4.40%, ctx=13, majf=0, minf=108 00:36:20.577 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.577 issued rwts: total=1354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.577 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.577 00:36:20.577 Run status group 0 (all jobs): 00:36:20.577 READ: bw=80.6MiB/s (84.6MB/s), 17.7MiB/s-33.5MiB/s (18.6MB/s-35.2MB/s), io=407MiB (427MB), run=5005-5045msec 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 bdev_null0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 
14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 [2024-11-06 14:00:42.911854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 bdev_null1 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 
14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:20.577 bdev_null2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.577 14:00:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:20.577 { 00:36:20.577 "params": { 00:36:20.577 "name": "Nvme$subsystem", 00:36:20.577 "trtype": "$TEST_TRANSPORT", 00:36:20.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.577 "adrfam": "ipv4", 00:36:20.577 "trsvcid": "$NVMF_PORT", 00:36:20.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.577 "hdgst": ${hdgst:-false}, 00:36:20.577 "ddgst": ${ddgst:-false} 00:36:20.577 }, 00:36:20.577 "method": "bdev_nvme_attach_controller" 00:36:20.577 } 00:36:20.577 EOF 00:36:20.577 )") 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # shift 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:20.577 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:20.578 { 00:36:20.578 "params": { 00:36:20.578 "name": "Nvme$subsystem", 00:36:20.578 "trtype": "$TEST_TRANSPORT", 00:36:20.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.578 "adrfam": "ipv4", 00:36:20.578 "trsvcid": "$NVMF_PORT", 00:36:20.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.578 "hdgst": ${hdgst:-false}, 00:36:20.578 "ddgst": ${ddgst:-false} 00:36:20.578 }, 00:36:20.578 "method": "bdev_nvme_attach_controller" 00:36:20.578 } 00:36:20.578 EOF 00:36:20.578 )") 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:20.578 { 00:36:20.578 "params": { 00:36:20.578 "name": "Nvme$subsystem", 00:36:20.578 "trtype": "$TEST_TRANSPORT", 00:36:20.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.578 "adrfam": "ipv4", 00:36:20.578 "trsvcid": "$NVMF_PORT", 00:36:20.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.578 "hdgst": ${hdgst:-false}, 00:36:20.578 "ddgst": ${ddgst:-false} 00:36:20.578 }, 00:36:20.578 "method": "bdev_nvme_attach_controller" 00:36:20.578 } 00:36:20.578 EOF 00:36:20.578 )") 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:20.578 "params": { 00:36:20.578 "name": "Nvme0", 00:36:20.578 "trtype": "tcp", 00:36:20.578 "traddr": "10.0.0.2", 00:36:20.578 "adrfam": "ipv4", 00:36:20.578 "trsvcid": "4420", 00:36:20.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.578 "hdgst": false, 00:36:20.578 "ddgst": false 00:36:20.578 }, 00:36:20.578 "method": "bdev_nvme_attach_controller" 00:36:20.578 },{ 00:36:20.578 "params": { 00:36:20.578 "name": "Nvme1", 00:36:20.578 "trtype": "tcp", 00:36:20.578 "traddr": "10.0.0.2", 00:36:20.578 "adrfam": "ipv4", 00:36:20.578 "trsvcid": "4420", 00:36:20.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:20.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:20.578 "hdgst": false, 00:36:20.578 "ddgst": false 00:36:20.578 }, 00:36:20.578 "method": "bdev_nvme_attach_controller" 00:36:20.578 },{ 00:36:20.578 "params": { 00:36:20.578 "name": "Nvme2", 00:36:20.578 "trtype": "tcp", 00:36:20.578 "traddr": "10.0.0.2", 00:36:20.578 "adrfam": "ipv4", 00:36:20.578 "trsvcid": "4420", 00:36:20.578 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:20.578 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:20.578 "hdgst": false, 00:36:20.578 "ddgst": false 00:36:20.578 }, 00:36:20.578 "method": "bdev_nvme_attach_controller" 00:36:20.578 }' 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.578 14:00:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:20.578 14:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.578 ... 00:36:20.578 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.578 ... 00:36:20.578 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.578 ... 
00:36:20.578 fio-3.35 00:36:20.578 Starting 24 threads 00:36:32.806 00:36:32.806 filename0: (groupid=0, jobs=1): err= 0: pid=949674: Wed Nov 6 14:00:54 2024 00:36:32.806 read: IOPS=506, BW=2028KiB/s (2076kB/s)(19.8MiB/10017msec) 00:36:32.806 slat (nsec): min=5734, max=73828, avg=15333.72, stdev=9881.99 00:36:32.806 clat (usec): min=1449, max=55336, avg=31440.50, stdev=5248.01 00:36:32.806 lat (usec): min=1473, max=55346, avg=31455.84, stdev=5247.79 00:36:32.806 clat percentiles (usec): 00:36:32.806 | 1.00th=[ 3064], 5.00th=[23725], 10.00th=[32113], 20.00th=[32375], 00:36:32.806 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.806 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.806 | 99.00th=[33817], 99.50th=[34866], 99.90th=[43254], 99.95th=[43254], 00:36:32.806 | 99.99th=[55313] 00:36:32.806 bw ( KiB/s): min= 1916, max= 3376, per=4.29%, avg=2029.21, stdev=331.16, samples=19 00:36:32.806 iops : min= 479, max= 844, avg=507.26, stdev=82.79, samples=19 00:36:32.806 lat (msec) : 2=0.24%, 4=1.34%, 10=1.02%, 20=1.85%, 50=95.51% 00:36:32.806 lat (msec) : 100=0.04% 00:36:32.806 cpu : usr=98.75%, sys=0.94%, ctx=13, majf=0, minf=58 00:36:32.806 IO depths : 1=5.8%, 2=11.8%, 4=23.9%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.806 filename0: (groupid=0, jobs=1): err= 0: pid=949675: Wed Nov 6 14:00:54 2024 00:36:32.806 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10023msec) 00:36:32.806 slat (nsec): min=5605, max=68304, avg=15866.94, stdev=11239.73 00:36:32.806 clat (usec): min=12988, max=57899, avg=31909.07, stdev=4712.00 00:36:32.806 lat (usec): min=12996, max=57904, avg=31924.93, stdev=4713.42 
00:36:32.806 clat percentiles (usec): 00:36:32.806 | 1.00th=[14484], 5.00th=[21890], 10.00th=[31327], 20.00th=[32113], 00:36:32.806 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.806 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.806 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:36:32.806 | 99.99th=[57934] 00:36:32.806 bw ( KiB/s): min= 1904, max= 2224, per=4.22%, avg=1994.70, stdev=99.22, samples=20 00:36:32.806 iops : min= 476, max= 556, avg=498.60, stdev=24.77, samples=20 00:36:32.806 lat (msec) : 20=4.04%, 50=94.72%, 100=1.24% 00:36:32.806 cpu : usr=99.05%, sys=0.68%, ctx=13, majf=0, minf=47 00:36:32.806 IO depths : 1=3.3%, 2=9.1%, 4=23.5%, 8=55.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.806 filename0: (groupid=0, jobs=1): err= 0: pid=949676: Wed Nov 6 14:00:54 2024 00:36:32.806 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10013msec) 00:36:32.806 slat (nsec): min=5607, max=90627, avg=16999.93, stdev=12752.24 00:36:32.806 clat (usec): min=2640, max=57197, avg=30372.22, stdev=6164.78 00:36:32.806 lat (usec): min=2658, max=57209, avg=30389.22, stdev=6167.62 00:36:32.806 clat percentiles (usec): 00:36:32.806 | 1.00th=[ 7963], 5.00th=[17957], 10.00th=[21890], 20.00th=[31327], 00:36:32.806 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.806 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:36:32.806 | 99.00th=[49546], 99.50th=[55313], 99.90th=[56886], 99.95th=[57410], 00:36:32.806 | 99.99th=[57410] 00:36:32.806 bw ( KiB/s): min= 1904, max= 2896, per=4.43%, avg=2093.21, stdev=272.59, samples=19 00:36:32.806 iops : 
min= 476, max= 724, avg=523.26, stdev=68.15, samples=19 00:36:32.806 lat (msec) : 4=0.61%, 10=1.18%, 20=6.84%, 50=90.40%, 100=0.97% 00:36:32.806 cpu : usr=98.82%, sys=0.82%, ctx=57, majf=0, minf=38 00:36:32.806 IO depths : 1=4.3%, 2=9.5%, 4=21.5%, 8=56.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.806 filename0: (groupid=0, jobs=1): err= 0: pid=949677: Wed Nov 6 14:00:54 2024 00:36:32.806 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.2MiB/10024msec) 00:36:32.806 slat (nsec): min=5634, max=81571, avg=17399.65, stdev=12697.21 00:36:32.806 clat (usec): min=12434, max=51406, avg=32479.03, stdev=1981.53 00:36:32.806 lat (usec): min=12446, max=51412, avg=32496.43, stdev=1981.86 00:36:32.806 clat percentiles (usec): 00:36:32.806 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.806 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.806 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.806 | 99.00th=[34341], 99.50th=[34866], 99.90th=[51119], 99.95th=[51119], 00:36:32.806 | 99.99th=[51643] 00:36:32.806 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1959.79, stdev=61.59, samples=19 00:36:32.806 iops : min= 479, max= 512, avg=489.95, stdev=15.40, samples=19 00:36:32.806 lat (msec) : 20=0.73%, 50=99.02%, 100=0.24% 00:36:32.806 cpu : usr=98.93%, sys=0.78%, ctx=13, majf=0, minf=53 00:36:32.806 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.806 issued rwts: total=4912,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:32.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.806 filename0: (groupid=0, jobs=1): err= 0: pid=949678: Wed Nov 6 14:00:54 2024 00:36:32.806 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:36:32.806 slat (nsec): min=5629, max=93432, avg=20279.79, stdev=15764.06 00:36:32.806 clat (usec): min=21036, max=40295, avg=32539.47, stdev=858.06 00:36:32.806 lat (usec): min=21044, max=40302, avg=32559.75, stdev=856.65 00:36:32.806 clat percentiles (usec): 00:36:32.806 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.806 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.806 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.806 | 99.00th=[33817], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:36:32.806 | 99.99th=[40109] 00:36:32.806 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1953.37, stdev=57.51, samples=19 00:36:32.806 iops : min= 479, max= 512, avg=488.26, stdev=14.34, samples=19 00:36:32.806 lat (msec) : 50=100.00% 00:36:32.807 cpu : usr=98.85%, sys=0.80%, ctx=64, majf=0, minf=47 00:36:32.807 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename0: (groupid=0, jobs=1): err= 0: pid=949679: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10011msec) 00:36:32.807 slat (nsec): min=5608, max=57393, avg=11373.26, stdev=7635.32 00:36:32.807 clat (usec): min=13652, max=54065, avg=32620.82, stdev=2615.76 00:36:32.807 lat (usec): min=13661, max=54081, avg=32632.19, stdev=2615.32 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 
1.00th=[22414], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.807 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.807 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:36:32.807 | 99.00th=[42730], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:36:32.807 | 99.99th=[54264] 00:36:32.807 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.47, stdev=65.23, samples=19 00:36:32.807 iops : min= 448, max= 512, avg=486.58, stdev=16.25, samples=19 00:36:32.807 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:36:32.807 cpu : usr=98.63%, sys=0.99%, ctx=68, majf=0, minf=42 00:36:32.807 IO depths : 1=4.2%, 2=10.4%, 4=24.7%, 8=52.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename0: (groupid=0, jobs=1): err= 0: pid=949680: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10008msec) 00:36:32.807 slat (nsec): min=5628, max=78970, avg=25237.37, stdev=14569.89 00:36:32.807 clat (usec): min=12240, max=87468, avg=32198.31, stdev=3317.11 00:36:32.807 lat (usec): min=12246, max=87488, avg=32223.54, stdev=3318.95 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[19530], 5.00th=[29230], 10.00th=[31851], 20.00th=[32113], 00:36:32.807 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.807 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.807 | 99.00th=[35390], 99.50th=[45876], 99.90th=[63701], 99.95th=[63701], 00:36:32.807 | 99.99th=[87557] 00:36:32.807 bw ( KiB/s): min= 1776, max= 2276, per=4.15%, avg=1962.95, stdev=100.11, samples=19 00:36:32.807 iops : min= 444, max= 569, avg=490.74, stdev=25.03, 
samples=19 00:36:32.807 lat (msec) : 20=1.26%, 50=98.26%, 100=0.49% 00:36:32.807 cpu : usr=98.79%, sys=0.78%, ctx=110, majf=0, minf=35 00:36:32.807 IO depths : 1=3.8%, 2=9.6%, 4=23.4%, 8=54.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename0: (groupid=0, jobs=1): err= 0: pid=949681: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10011msec) 00:36:32.807 slat (nsec): min=5638, max=96583, avg=29147.97, stdev=16321.42 00:36:32.807 clat (usec): min=13894, max=49453, avg=32452.63, stdev=1913.84 00:36:32.807 lat (usec): min=13905, max=49476, avg=32481.77, stdev=1914.48 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[26346], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:36:32.807 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.807 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.807 | 99.00th=[39060], 99.50th=[44827], 99.90th=[47973], 99.95th=[49546], 00:36:32.807 | 99.99th=[49546] 00:36:32.807 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1952.95, stdev=57.16, samples=19 00:36:32.807 iops : min= 479, max= 512, avg=488.16, stdev=14.16, samples=19 00:36:32.807 lat (msec) : 20=0.41%, 50=99.59% 00:36:32.807 cpu : usr=98.55%, sys=1.00%, ctx=121, majf=0, minf=31 00:36:32.807 IO depths : 1=5.0%, 2=11.2%, 4=24.8%, 8=51.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:32.807 filename1: (groupid=0, jobs=1): err= 0: pid=949682: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10011msec) 00:36:32.807 slat (usec): min=5, max=107, avg=29.01, stdev=18.24 00:36:32.807 clat (usec): min=17623, max=56840, avg=32369.81, stdev=2698.20 00:36:32.807 lat (usec): min=17629, max=56851, avg=32398.82, stdev=2699.78 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[21365], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113], 00:36:32.807 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.807 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:32.807 | 99.00th=[45876], 99.50th=[48497], 99.90th=[54789], 99.95th=[54789], 00:36:32.807 | 99.99th=[56886] 00:36:32.807 bw ( KiB/s): min= 1856, max= 2096, per=4.15%, avg=1960.53, stdev=71.89, samples=19 00:36:32.807 iops : min= 464, max= 524, avg=490.05, stdev=17.88, samples=19 00:36:32.807 lat (msec) : 20=0.86%, 50=98.94%, 100=0.20% 00:36:32.807 cpu : usr=98.89%, sys=0.80%, ctx=13, majf=0, minf=41 00:36:32.807 IO depths : 1=5.0%, 2=10.9%, 4=23.9%, 8=52.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename1: (groupid=0, jobs=1): err= 0: pid=949683: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10008msec) 00:36:32.807 slat (nsec): min=5619, max=90923, avg=18385.81, stdev=14114.91 00:36:32.807 clat (usec): min=9881, max=56868, avg=32401.43, stdev=4066.95 00:36:32.807 lat (usec): min=9887, max=56888, avg=32419.82, stdev=4067.43 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[16909], 5.00th=[25035], 10.00th=[31589], 
20.00th=[32113], 00:36:32.807 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.807 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:36:32.807 | 99.00th=[48497], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:36:32.807 | 99.99th=[56886] 00:36:32.807 bw ( KiB/s): min= 1792, max= 2240, per=4.15%, avg=1959.47, stdev=97.51, samples=19 00:36:32.807 iops : min= 448, max= 560, avg=489.79, stdev=24.36, samples=19 00:36:32.807 lat (msec) : 10=0.04%, 20=1.81%, 50=97.19%, 100=0.96% 00:36:32.807 cpu : usr=98.96%, sys=0.74%, ctx=12, majf=0, minf=37 00:36:32.807 IO depths : 1=2.2%, 2=7.5%, 4=21.8%, 8=57.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename1: (groupid=0, jobs=1): err= 0: pid=949684: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10008msec) 00:36:32.807 slat (nsec): min=5891, max=91662, avg=27752.23, stdev=13732.40 00:36:32.807 clat (usec): min=11791, max=56824, avg=32450.39, stdev=2049.91 00:36:32.807 lat (usec): min=11801, max=56848, avg=32478.14, stdev=2050.07 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[30278], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:32.807 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.807 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.807 | 99.00th=[34341], 99.50th=[34866], 99.90th=[56886], 99.95th=[56886], 00:36:32.807 | 99.99th=[56886] 00:36:32.807 bw ( KiB/s): min= 1792, max= 2052, per=4.12%, avg=1946.95, stdev=68.95, samples=19 00:36:32.807 iops : min= 448, max= 513, avg=486.74, stdev=17.24, samples=19 00:36:32.807 lat (msec) : 
20=0.65%, 50=99.02%, 100=0.33% 00:36:32.807 cpu : usr=99.04%, sys=0.65%, ctx=13, majf=0, minf=32 00:36:32.807 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename1: (groupid=0, jobs=1): err= 0: pid=949685: Wed Nov 6 14:00:54 2024 00:36:32.807 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10016msec) 00:36:32.807 slat (nsec): min=5619, max=75683, avg=14165.85, stdev=11633.19 00:36:32.807 clat (usec): min=15782, max=39882, avg=32515.21, stdev=1534.22 00:36:32.807 lat (usec): min=15791, max=39890, avg=32529.37, stdev=1534.10 00:36:32.807 clat percentiles (usec): 00:36:32.807 | 1.00th=[26346], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.807 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.807 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:36:32.807 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:36:32.807 | 99.99th=[40109] 00:36:32.807 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1959.79, stdev=61.59, samples=19 00:36:32.807 iops : min= 479, max= 512, avg=489.95, stdev=15.40, samples=19 00:36:32.807 lat (msec) : 20=0.37%, 50=99.63% 00:36:32.807 cpu : usr=99.09%, sys=0.60%, ctx=16, majf=0, minf=35 00:36:32.807 IO depths : 1=4.6%, 2=10.8%, 4=24.9%, 8=51.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:32.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.807 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.807 filename1: 
(groupid=0, jobs=1): err= 0: pid=949686: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10021msec) 00:36:32.808 slat (nsec): min=5633, max=73448, avg=15396.61, stdev=10808.58 00:36:32.808 clat (usec): min=7832, max=36122, avg=32270.59, stdev=2478.04 00:36:32.808 lat (usec): min=7841, max=36130, avg=32285.99, stdev=2478.19 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[18220], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.808 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.808 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[33817], 99.50th=[34866], 99.90th=[34866], 99.95th=[35914], 00:36:32.808 | 99.99th=[35914] 00:36:32.808 bw ( KiB/s): min= 1916, max= 2352, per=4.17%, avg=1972.80, stdev=105.84, samples=20 00:36:32.808 iops : min= 479, max= 588, avg=493.20, stdev=26.46, samples=20 00:36:32.808 lat (msec) : 10=0.28%, 20=1.33%, 50=98.38% 00:36:32.808 cpu : usr=99.07%, sys=0.60%, ctx=73, majf=0, minf=53 00:36:32.808 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename1: (groupid=0, jobs=1): err= 0: pid=949687: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec) 00:36:32.808 slat (nsec): min=6103, max=69602, avg=24827.41, stdev=10968.46 00:36:32.808 clat (usec): min=20973, max=39575, avg=32485.16, stdev=836.34 00:36:32.808 lat (usec): min=20988, max=39587, avg=32509.99, stdev=836.76 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:32.808 | 30.00th=[32375], 
40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.808 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[33817], 99.50th=[34341], 99.90th=[37487], 99.95th=[38536], 00:36:32.808 | 99.99th=[39584] 00:36:32.808 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1953.00, stdev=57.74, samples=19 00:36:32.808 iops : min= 479, max= 512, avg=488.21, stdev=14.37, samples=19 00:36:32.808 lat (msec) : 50=100.00% 00:36:32.808 cpu : usr=98.75%, sys=0.88%, ctx=60, majf=0, minf=37 00:36:32.808 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename1: (groupid=0, jobs=1): err= 0: pid=949688: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10018msec) 00:36:32.808 slat (nsec): min=5626, max=76418, avg=10765.00, stdev=8701.28 00:36:32.808 clat (usec): min=13021, max=51074, avg=32437.38, stdev=2057.56 00:36:32.808 lat (usec): min=13027, max=51080, avg=32448.14, stdev=2057.22 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[21103], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.808 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.808 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[33817], 99.50th=[34866], 99.90th=[49546], 99.95th=[50070], 00:36:32.808 | 99.99th=[51119] 00:36:32.808 bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=1966.47, stdev=61.84, samples=19 00:36:32.808 iops : min= 479, max= 512, avg=491.58, stdev=15.41, samples=19 00:36:32.808 lat (msec) : 20=0.81%, 50=99.15%, 100=0.04% 00:36:32.808 cpu : usr=99.18%, sys=0.52%, ctx=12, majf=0, 
minf=31 00:36:32.808 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename1: (groupid=0, jobs=1): err= 0: pid=949689: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10009msec) 00:36:32.808 slat (nsec): min=5482, max=95763, avg=17050.45, stdev=14337.58 00:36:32.808 clat (usec): min=9602, max=57494, avg=32601.74, stdev=3348.97 00:36:32.808 lat (usec): min=9608, max=57511, avg=32618.79, stdev=3349.13 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[19006], 5.00th=[31327], 10.00th=[32113], 20.00th=[32113], 00:36:32.808 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.808 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:36:32.808 | 99.00th=[49021], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:36:32.808 | 99.99th=[57410] 00:36:32.808 bw ( KiB/s): min= 1715, max= 2096, per=4.12%, avg=1948.79, stdev=73.95, samples=19 00:36:32.808 iops : min= 428, max= 524, avg=487.16, stdev=18.62, samples=19 00:36:32.808 lat (msec) : 10=0.12%, 20=0.94%, 50=98.12%, 100=0.82% 00:36:32.808 cpu : usr=98.93%, sys=0.63%, ctx=101, majf=0, minf=36 00:36:32.808 IO depths : 1=2.1%, 2=4.3%, 4=9.1%, 8=70.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=90.9%, 8=6.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename2: (groupid=0, jobs=1): err= 0: pid=949690: Wed Nov 6 14:00:54 2024 
00:36:32.808 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:36:32.808 slat (nsec): min=5381, max=98602, avg=31395.60, stdev=16519.64 00:36:32.808 clat (usec): min=11955, max=65358, avg=32415.02, stdev=2208.15 00:36:32.808 lat (usec): min=11963, max=65373, avg=32446.42, stdev=2208.22 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[28705], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:32.808 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.808 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[34866], 99.50th=[37487], 99.90th=[58983], 99.95th=[58983], 00:36:32.808 | 99.99th=[65274] 00:36:32.808 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.47, stdev=68.21, samples=19 00:36:32.808 iops : min= 448, max= 512, avg=486.58, stdev=16.99, samples=19 00:36:32.808 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:36:32.808 cpu : usr=98.95%, sys=0.75%, ctx=11, majf=0, minf=27 00:36:32.808 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename2: (groupid=0, jobs=1): err= 0: pid=949691: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:36:32.808 slat (nsec): min=5411, max=96385, avg=31322.04, stdev=16728.99 00:36:32.808 clat (usec): min=12026, max=58436, avg=32430.37, stdev=2109.94 00:36:32.808 lat (usec): min=12032, max=58452, avg=32461.69, stdev=2109.82 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[30540], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:32.808 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.808 
| 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[33817], 99.50th=[34866], 99.90th=[58459], 99.95th=[58459], 00:36:32.808 | 99.99th=[58459] 00:36:32.808 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.47, stdev=68.21, samples=19 00:36:32.808 iops : min= 448, max= 512, avg=486.58, stdev=16.99, samples=19 00:36:32.808 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:36:32.808 cpu : usr=98.89%, sys=0.80%, ctx=19, majf=0, minf=50 00:36:32.808 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:32.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.808 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.808 filename2: (groupid=0, jobs=1): err= 0: pid=949692: Wed Nov 6 14:00:54 2024 00:36:32.808 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:36:32.808 slat (nsec): min=5619, max=95734, avg=27218.11, stdev=14463.16 00:36:32.808 clat (usec): min=11944, max=58219, avg=32450.78, stdev=2101.74 00:36:32.808 lat (usec): min=11950, max=58236, avg=32478.00, stdev=2101.78 00:36:32.808 clat percentiles (usec): 00:36:32.808 | 1.00th=[30540], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:32.808 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:32.808 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.808 | 99.00th=[34341], 99.50th=[34866], 99.90th=[57934], 99.95th=[58459], 00:36:32.808 | 99.99th=[58459] 00:36:32.808 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.47, stdev=68.21, samples=19 00:36:32.809 iops : min= 448, max= 512, avg=486.58, stdev=16.99, samples=19 00:36:32.809 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:36:32.809 cpu : usr=98.96%, sys=0.73%, ctx=10, majf=0, minf=30 00:36:32.809 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:32.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 filename2: (groupid=0, jobs=1): err= 0: pid=949693: Wed Nov 6 14:00:54 2024 00:36:32.809 read: IOPS=492, BW=1972KiB/s (2019kB/s)(19.3MiB/10013msec) 00:36:32.809 slat (nsec): min=5625, max=84020, avg=10794.71, stdev=9574.62 00:36:32.809 clat (usec): min=12460, max=55685, avg=32369.25, stdev=2755.77 00:36:32.809 lat (usec): min=12468, max=55705, avg=32380.05, stdev=2756.24 00:36:32.809 clat percentiles (usec): 00:36:32.809 | 1.00th=[19530], 5.00th=[28705], 10.00th=[32113], 20.00th=[32375], 00:36:32.809 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.809 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:36:32.809 | 99.00th=[40109], 99.50th=[41157], 99.90th=[55313], 99.95th=[55313], 00:36:32.809 | 99.99th=[55837] 00:36:32.809 bw ( KiB/s): min= 1916, max= 2107, per=4.17%, avg=1969.53, stdev=66.62, samples=19 00:36:32.809 iops : min= 479, max= 526, avg=492.26, stdev=16.50, samples=19 00:36:32.809 lat (msec) : 20=1.03%, 50=98.60%, 100=0.36% 00:36:32.809 cpu : usr=98.72%, sys=0.87%, ctx=120, majf=0, minf=80 00:36:32.809 IO depths : 1=4.2%, 2=10.1%, 4=23.9%, 8=53.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:32.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 filename2: (groupid=0, jobs=1): err= 0: pid=949694: Wed Nov 6 14:00:54 2024 00:36:32.809 read: IOPS=489, BW=1956KiB/s 
(2003kB/s)(19.1MiB/10010msec) 00:36:32.809 slat (nsec): min=5648, max=72068, avg=17960.43, stdev=12192.53 00:36:32.809 clat (usec): min=21630, max=40217, avg=32565.25, stdev=981.85 00:36:32.809 lat (usec): min=21640, max=40248, avg=32583.21, stdev=981.73 00:36:32.809 clat percentiles (usec): 00:36:32.809 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.809 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.809 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:32.809 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:36:32.809 | 99.99th=[40109] 00:36:32.809 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1953.00, stdev=57.74, samples=19 00:36:32.809 iops : min= 479, max= 512, avg=488.21, stdev=14.37, samples=19 00:36:32.809 lat (msec) : 50=100.00% 00:36:32.809 cpu : usr=98.83%, sys=0.87%, ctx=13, majf=0, minf=39 00:36:32.809 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:32.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 filename2: (groupid=0, jobs=1): err= 0: pid=949695: Wed Nov 6 14:00:54 2024 00:36:32.809 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10013msec) 00:36:32.809 slat (nsec): min=5619, max=54684, avg=10308.65, stdev=7056.54 00:36:32.809 clat (usec): min=20563, max=44316, avg=32636.59, stdev=1741.42 00:36:32.809 lat (usec): min=20568, max=44322, avg=32646.90, stdev=1741.19 00:36:32.809 clat percentiles (usec): 00:36:32.809 | 1.00th=[23200], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:32.809 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:32.809 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 
95.00th=[33817], 00:36:32.809 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43779], 99.95th=[44303], 00:36:32.809 | 99.99th=[44303] 00:36:32.809 bw ( KiB/s): min= 1904, max= 2048, per=4.13%, avg=1953.00, stdev=57.98, samples=19 00:36:32.809 iops : min= 476, max= 512, avg=488.21, stdev=14.43, samples=19 00:36:32.809 lat (msec) : 50=100.00% 00:36:32.809 cpu : usr=98.68%, sys=0.85%, ctx=66, majf=0, minf=38 00:36:32.809 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:32.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 filename2: (groupid=0, jobs=1): err= 0: pid=949696: Wed Nov 6 14:00:54 2024 00:36:32.809 read: IOPS=497, BW=1989KiB/s (2036kB/s)(19.5MiB/10025msec) 00:36:32.809 slat (nsec): min=5610, max=91833, avg=20291.23, stdev=16567.85 00:36:32.809 clat (usec): min=11399, max=61911, avg=32021.43, stdev=4057.97 00:36:32.809 lat (usec): min=11416, max=61917, avg=32041.72, stdev=4059.33 00:36:32.809 clat percentiles (usec): 00:36:32.809 | 1.00th=[18220], 5.00th=[23462], 10.00th=[28705], 20.00th=[32113], 00:36:32.809 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.809 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[35390], 00:36:32.809 | 99.00th=[45876], 99.50th=[47449], 99.90th=[62129], 99.95th=[62129], 00:36:32.809 | 99.99th=[62129] 00:36:32.809 bw ( KiB/s): min= 1792, max= 2180, per=4.20%, avg=1986.60, stdev=99.02, samples=20 00:36:32.809 iops : min= 448, max= 545, avg=496.65, stdev=24.76, samples=20 00:36:32.809 lat (msec) : 20=2.01%, 50=97.75%, 100=0.24% 00:36:32.809 cpu : usr=98.46%, sys=1.02%, ctx=132, majf=0, minf=41 00:36:32.809 IO depths : 1=4.1%, 2=8.3%, 4=18.8%, 8=60.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:32.809 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=92.4%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 filename2: (groupid=0, jobs=1): err= 0: pid=949697: Wed Nov 6 14:00:54 2024 00:36:32.809 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.0MiB/10009msec) 00:36:32.809 slat (nsec): min=5505, max=91171, avg=26021.89, stdev=17205.03 00:36:32.809 clat (usec): min=6648, max=57990, avg=32632.95, stdev=3495.62 00:36:32.809 lat (usec): min=6654, max=58009, avg=32658.97, stdev=3495.61 00:36:32.809 clat percentiles (usec): 00:36:32.809 | 1.00th=[22676], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:36:32.809 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:32.809 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[34341], 00:36:32.809 | 99.00th=[48497], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:36:32.809 | 99.99th=[57934] 00:36:32.809 bw ( KiB/s): min= 1760, max= 2048, per=4.10%, avg=1936.53, stdev=83.47, samples=19 00:36:32.809 iops : min= 440, max= 512, avg=484.05, stdev=20.89, samples=19 00:36:32.809 lat (msec) : 10=0.37%, 20=0.45%, 50=98.40%, 100=0.78% 00:36:32.809 cpu : usr=98.31%, sys=1.05%, ctx=121, majf=0, minf=23 00:36:32.809 IO depths : 1=5.2%, 2=10.9%, 4=23.8%, 8=52.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:32.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.809 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.809 00:36:32.809 Run status group 0 (all jobs): 00:36:32.809 READ: bw=46.2MiB/s (48.4MB/s), 1946KiB/s-2096KiB/s (1993kB/s-2147kB/s), io=463MiB (485MB), run=10004-10025msec 00:36:32.809 14:00:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.809 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:32.810 
14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 bdev_null0 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 [2024-11-06 14:00:54.703482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 bdev_null1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.810 { 00:36:32.810 "params": { 00:36:32.810 "name": "Nvme$subsystem", 00:36:32.810 "trtype": "$TEST_TRANSPORT", 00:36:32.810 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:36:32.810 "adrfam": "ipv4", 00:36:32.810 "trsvcid": "$NVMF_PORT", 00:36:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.810 "hdgst": ${hdgst:-false}, 00:36:32.810 "ddgst": ${ddgst:-false} 00:36:32.810 }, 00:36:32.810 "method": "bdev_nvme_attach_controller" 00:36:32.810 } 00:36:32.810 EOF 00:36:32.810 )") 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.810 14:00:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.810 { 00:36:32.810 "params": { 00:36:32.810 "name": "Nvme$subsystem", 00:36:32.810 "trtype": "$TEST_TRANSPORT", 00:36:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.810 "adrfam": "ipv4", 00:36:32.810 "trsvcid": "$NVMF_PORT", 00:36:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.810 "hdgst": ${hdgst:-false}, 00:36:32.810 "ddgst": ${ddgst:-false} 00:36:32.810 }, 00:36:32.810 "method": "bdev_nvme_attach_controller" 00:36:32.810 } 00:36:32.810 EOF 00:36:32.810 )") 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:32.810 14:00:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:32.810 "params": { 00:36:32.810 "name": "Nvme0", 00:36:32.810 "trtype": "tcp", 00:36:32.810 "traddr": "10.0.0.2", 00:36:32.810 "adrfam": "ipv4", 00:36:32.811 "trsvcid": "4420", 00:36:32.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.811 "hdgst": false, 00:36:32.811 "ddgst": false 00:36:32.811 }, 00:36:32.811 "method": "bdev_nvme_attach_controller" 00:36:32.811 },{ 00:36:32.811 "params": { 00:36:32.811 "name": "Nvme1", 00:36:32.811 "trtype": "tcp", 00:36:32.811 "traddr": "10.0.0.2", 00:36:32.811 "adrfam": "ipv4", 00:36:32.811 "trsvcid": "4420", 00:36:32.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.811 "hdgst": false, 00:36:32.811 "ddgst": false 00:36:32.811 }, 00:36:32.811 "method": "bdev_nvme_attach_controller" 00:36:32.811 }' 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:32.811 14:00:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.811 14:00:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.811 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:32.811 ... 00:36:32.811 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:32.811 ... 00:36:32.811 fio-3.35 00:36:32.811 Starting 4 threads 00:36:38.104 00:36:38.104 filename0: (groupid=0, jobs=1): err= 0: pid=952189: Wed Nov 6 14:01:00 2024 00:36:38.104 read: IOPS=2093, BW=16.4MiB/s (17.1MB/s)(81.8MiB/5003msec) 00:36:38.104 slat (usec): min=5, max=509, avg= 6.32, stdev= 5.51 00:36:38.104 clat (usec): min=1704, max=6499, avg=3803.49, stdev=709.06 00:36:38.104 lat (usec): min=1725, max=6505, avg=3809.81, stdev=709.08 00:36:38.104 clat percentiles (usec): 00:36:38.104 | 1.00th=[ 2638], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3359], 00:36:38.104 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3654], 60.00th=[ 3720], 00:36:38.104 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 5211], 95.00th=[ 5473], 00:36:38.104 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6325], 00:36:38.104 | 99.99th=[ 6521] 00:36:38.104 bw ( KiB/s): min=16352, max=17056, per=25.06%, avg=16745.60, stdev=203.26, samples=10 00:36:38.104 iops : min= 2044, max= 2132, avg=2093.20, stdev=25.41, samples=10 00:36:38.104 lat (msec) : 2=0.06%, 4=77.66%, 10=22.28% 00:36:38.104 cpu : usr=96.72%, sys=3.04%, ctx=7, majf=0, minf=9 00:36:38.104 IO depths : 1=0.1%, 2=0.1%, 4=72.2%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:38.104 issued rwts: total=10474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.104 filename0: (groupid=0, jobs=1): err= 0: pid=952190: Wed Nov 6 14:01:00 2024 00:36:38.104 read: IOPS=2166, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5002msec) 00:36:38.104 slat (usec): min=7, max=506, avg= 9.19, stdev= 5.42 00:36:38.104 clat (usec): min=918, max=7253, avg=3668.92, stdev=611.35 00:36:38.104 lat (usec): min=929, max=7268, avg=3678.11, stdev=611.35 00:36:38.104 clat percentiles (usec): 00:36:38.104 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3228], 00:36:38.104 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3654], 00:36:38.104 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 4621], 95.00th=[ 4883], 00:36:38.104 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6063], 99.95th=[ 6128], 00:36:38.104 | 99.99th=[ 6325] 00:36:38.104 bw ( KiB/s): min=16176, max=17936, per=25.94%, avg=17328.00, stdev=617.75, samples=10 00:36:38.104 iops : min= 2022, max= 2242, avg=2166.00, stdev=77.22, samples=10 00:36:38.104 lat (usec) : 1000=0.01% 00:36:38.104 lat (msec) : 2=0.01%, 4=78.13%, 10=21.86% 00:36:38.104 cpu : usr=96.26%, sys=3.50%, ctx=10, majf=0, minf=11 00:36:38.104 IO depths : 1=0.1%, 2=0.5%, 4=70.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 issued rwts: total=10835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.104 filename1: (groupid=0, jobs=1): err= 0: pid=952191: Wed Nov 6 14:01:00 2024 00:36:38.104 read: IOPS=2050, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5002msec) 00:36:38.104 slat (usec): min=5, max=507, avg= 6.39, stdev= 5.48 00:36:38.104 clat (usec): min=2037, max=7096, avg=3883.25, stdev=705.98 00:36:38.104 lat (usec): 
min=2042, max=7109, avg=3889.64, stdev=706.01 00:36:38.104 clat percentiles (usec): 00:36:38.104 | 1.00th=[ 2769], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3425], 00:36:38.104 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785], 00:36:38.104 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 5211], 95.00th=[ 5538], 00:36:38.104 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6390], 99.95th=[ 6390], 00:36:38.104 | 99.99th=[ 7111] 00:36:38.104 bw ( KiB/s): min=16032, max=16976, per=24.55%, avg=16400.10, stdev=298.39, samples=10 00:36:38.104 iops : min= 2004, max= 2122, avg=2050.00, stdev=37.29, samples=10 00:36:38.104 lat (msec) : 4=74.01%, 10=25.99% 00:36:38.104 cpu : usr=96.64%, sys=3.14%, ctx=9, majf=0, minf=9 00:36:38.104 IO depths : 1=0.1%, 2=0.2%, 4=72.7%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 issued rwts: total=10259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.104 filename1: (groupid=0, jobs=1): err= 0: pid=952192: Wed Nov 6 14:01:00 2024 00:36:38.104 read: IOPS=2042, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:36:38.104 slat (nsec): min=5440, max=75998, avg=6274.06, stdev=2440.06 00:36:38.104 clat (usec): min=1504, max=6663, avg=3899.70, stdev=708.09 00:36:38.104 lat (usec): min=1510, max=6669, avg=3905.97, stdev=708.06 00:36:38.104 clat percentiles (usec): 00:36:38.104 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3294], 20.00th=[ 3458], 00:36:38.104 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3785], 00:36:38.104 | 70.00th=[ 3884], 80.00th=[ 4113], 90.00th=[ 5211], 95.00th=[ 5473], 00:36:38.104 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6325], 99.95th=[ 6587], 00:36:38.104 | 99.99th=[ 6652] 00:36:38.104 bw ( KiB/s): min=16128, max=16576, per=24.44%, avg=16331.20, 
stdev=153.10, samples=10 00:36:38.104 iops : min= 2016, max= 2072, avg=2041.40, stdev=19.14, samples=10 00:36:38.104 lat (msec) : 2=0.05%, 4=73.82%, 10=26.13% 00:36:38.104 cpu : usr=96.62%, sys=3.14%, ctx=6, majf=0, minf=9 00:36:38.104 IO depths : 1=0.1%, 2=0.1%, 4=72.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:38.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:38.104 issued rwts: total=10215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:38.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:38.104 00:36:38.104 Run status group 0 (all jobs): 00:36:38.104 READ: bw=65.2MiB/s (68.4MB/s), 16.0MiB/s-16.9MiB/s (16.7MB/s-17.7MB/s), io=326MiB (342MB), run=5002-5003msec 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.104 00:36:38.104 real 0m24.520s 00:36:38.104 user 5m16.914s 00:36:38.104 sys 0m4.287s 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.104 ************************************ 00:36:38.104 END TEST fio_dif_rand_params 00:36:38.104 ************************************ 00:36:38.104 14:01:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:38.104 14:01:01 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:38.104 14:01:01 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:38.104 14:01:01 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.104 ************************************ 00:36:38.104 START TEST fio_dif_digest 00:36:38.104 ************************************ 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.104 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.105 bdev_null0 00:36:38.105 14:01:01 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.105 [2024-11-06 14:01:01.233119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:38.105 { 00:36:38.105 "params": { 00:36:38.105 "name": "Nvme$subsystem", 00:36:38.105 "trtype": "$TEST_TRANSPORT", 00:36:38.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:38.105 "adrfam": "ipv4", 00:36:38.105 "trsvcid": "$NVMF_PORT", 00:36:38.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:38.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:38.105 "hdgst": ${hdgst:-false}, 00:36:38.105 "ddgst": ${ddgst:-false} 
00:36:38.105 }, 00:36:38.105 "method": "bdev_nvme_attach_controller" 00:36:38.105 } 00:36:38.105 EOF 00:36:38.105 )") 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:38.105 "params": { 00:36:38.105 "name": "Nvme0", 00:36:38.105 "trtype": "tcp", 00:36:38.105 "traddr": "10.0.0.2", 00:36:38.105 "adrfam": "ipv4", 00:36:38.105 "trsvcid": "4420", 00:36:38.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.105 "hdgst": true, 00:36:38.105 "ddgst": true 00:36:38.105 }, 00:36:38.105 "method": "bdev_nvme_attach_controller" 00:36:38.105 }' 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:38.105 14:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.365 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:38.365 ... 00:36:38.365 fio-3.35 00:36:38.365 Starting 3 threads 00:36:50.602 00:36:50.602 filename0: (groupid=0, jobs=1): err= 0: pid=953391: Wed Nov 6 14:01:12 2024 00:36:50.602 read: IOPS=231, BW=28.9MiB/s (30.4MB/s)(291MiB/10048msec) 00:36:50.602 slat (nsec): min=5826, max=61321, avg=7088.18, stdev=1833.17 00:36:50.602 clat (usec): min=7354, max=52789, avg=12924.94, stdev=1622.96 00:36:50.602 lat (usec): min=7361, max=52796, avg=12932.03, stdev=1623.01 00:36:50.602 clat percentiles (usec): 00:36:50.602 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11600], 20.00th=[12125], 00:36:50.602 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:36:50.602 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:36:50.602 | 99.00th=[15533], 99.50th=[15795], 99.90th=[17433], 99.95th=[49546], 00:36:50.602 | 99.99th=[52691] 00:36:50.602 bw ( KiB/s): min=28416, max=31744, per=35.11%, avg=29760.00, stdev=813.26, samples=20 00:36:50.602 iops : min= 222, max= 248, avg=232.50, stdev= 6.35, samples=20 00:36:50.602 lat (msec) : 10=2.41%, 20=97.51%, 50=0.04%, 100=0.04% 00:36:50.602 cpu : usr=94.29%, sys=5.44%, ctx=21, majf=0, minf=186 
00:36:50.602 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.602 filename0: (groupid=0, jobs=1): err= 0: pid=953392: Wed Nov 6 14:01:12 2024 00:36:50.602 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10047msec) 00:36:50.602 slat (nsec): min=5864, max=35866, avg=7401.10, stdev=1569.45 00:36:50.602 clat (usec): min=7970, max=55293, avg=13553.99, stdev=2654.44 00:36:50.602 lat (usec): min=7976, max=55303, avg=13561.39, stdev=2654.54 00:36:50.602 clat percentiles (usec): 00:36:50.602 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:36:50.602 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:36:50.602 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:36:50.602 | 99.00th=[16188], 99.50th=[16712], 99.90th=[54789], 99.95th=[54789], 00:36:50.602 | 99.99th=[55313] 00:36:50.602 bw ( KiB/s): min=26112, max=30464, per=33.48%, avg=28377.60, stdev=980.20, samples=20 00:36:50.602 iops : min= 204, max= 238, avg=221.70, stdev= 7.66, samples=20 00:36:50.602 lat (msec) : 10=1.31%, 20=98.33%, 50=0.05%, 100=0.32% 00:36:50.602 cpu : usr=94.45%, sys=5.29%, ctx=21, majf=0, minf=127 00:36:50.602 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 issued rwts: total=2219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.602 filename0: (groupid=0, jobs=1): err= 0: pid=953393: Wed Nov 6 14:01:12 2024 00:36:50.602 
read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10047msec) 00:36:50.602 slat (nsec): min=5809, max=30500, avg=7040.49, stdev=1405.02 00:36:50.602 clat (usec): min=9298, max=55766, avg=14274.52, stdev=3446.91 00:36:50.602 lat (usec): min=9304, max=55772, avg=14281.56, stdev=3446.90 00:36:50.602 clat percentiles (usec): 00:36:50.602 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:36:50.602 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:36:50.602 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15664], 95.00th=[16057], 00:36:50.602 | 99.00th=[17695], 99.50th=[53216], 99.90th=[55313], 99.95th=[55837], 00:36:50.602 | 99.99th=[55837] 00:36:50.602 bw ( KiB/s): min=22528, max=28416, per=31.79%, avg=26944.00, stdev=1372.02, samples=20 00:36:50.602 iops : min= 176, max= 222, avg=210.50, stdev=10.72, samples=20 00:36:50.602 lat (msec) : 10=0.52%, 20=98.77%, 50=0.09%, 100=0.62% 00:36:50.602 cpu : usr=94.59%, sys=5.14%, ctx=23, majf=0, minf=146 00:36:50.602 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.602 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.602 00:36:50.602 Run status group 0 (all jobs): 00:36:50.602 READ: bw=82.8MiB/s (86.8MB/s), 26.2MiB/s-28.9MiB/s (27.5MB/s-30.4MB/s), io=832MiB (872MB), run=10047-10048msec 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # 
local sub_id=0 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.602 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.603 00:36:50.603 real 0m11.207s 00:36:50.603 user 0m46.196s 00:36:50.603 sys 0m1.899s 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:50.603 14:01:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.603 ************************************ 00:36:50.603 END TEST fio_dif_digest 00:36:50.603 ************************************ 00:36:50.603 14:01:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:50.603 14:01:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.603 rmmod nvme_tcp 00:36:50.603 rmmod nvme_fabrics 00:36:50.603 rmmod nvme_keyring 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 943239 ']' 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 943239 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 943239 ']' 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 943239 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 943239 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 943239' 00:36:50.603 killing process with pid 943239 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@971 -- # kill 943239 00:36:50.603 14:01:12 nvmf_dif -- common/autotest_common.sh@976 -- # wait 943239 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:50.603 14:01:12 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.518 Waiting for block devices as requested 00:36:52.518 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:52.518 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:52.518 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:52.779 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:52.779 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:52.779 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.039 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:53.039 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:53.039 0000:65:00.0 
(144d a80a): vfio-pci -> nvme 00:36:53.300 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:53.300 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:53.300 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:53.561 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:53.561 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:53.561 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.561 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:53.822 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:54.083 14:01:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.083 14:01:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:54.083 14:01:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.999 14:01:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:55.999 00:36:55.999 real 1m16.679s 00:36:55.999 user 8m3.313s 00:36:55.999 sys 0m20.748s 00:36:55.999 14:01:19 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:55.999 14:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:55.999 ************************************ 00:36:55.999 END TEST nvmf_dif 00:36:55.999 ************************************ 00:36:56.260 14:01:19 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:56.260 14:01:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:56.260 14:01:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:56.260 14:01:19 -- common/autotest_common.sh@10 -- # set +x 00:36:56.260 ************************************ 00:36:56.260 START TEST nvmf_abort_qd_sizes 00:36:56.260 ************************************ 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:56.260 * Looking for test storage... 00:36:56.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:56.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.260 --rc genhtml_branch_coverage=1 00:36:56.260 --rc genhtml_function_coverage=1 00:36:56.260 --rc 
genhtml_legend=1 00:36:56.260 --rc geninfo_all_blocks=1 00:36:56.260 --rc geninfo_unexecuted_blocks=1 00:36:56.260 00:36:56.260 ' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:56.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.260 --rc genhtml_branch_coverage=1 00:36:56.260 --rc genhtml_function_coverage=1 00:36:56.260 --rc genhtml_legend=1 00:36:56.260 --rc geninfo_all_blocks=1 00:36:56.260 --rc geninfo_unexecuted_blocks=1 00:36:56.260 00:36:56.260 ' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:56.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.260 --rc genhtml_branch_coverage=1 00:36:56.260 --rc genhtml_function_coverage=1 00:36:56.260 --rc genhtml_legend=1 00:36:56.260 --rc geninfo_all_blocks=1 00:36:56.260 --rc geninfo_unexecuted_blocks=1 00:36:56.260 00:36:56.260 ' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:56.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.260 --rc genhtml_branch_coverage=1 00:36:56.260 --rc genhtml_function_coverage=1 00:36:56.260 --rc genhtml_legend=1 00:36:56.260 --rc geninfo_all_blocks=1 00:36:56.260 --rc geninfo_unexecuted_blocks=1 00:36:56.260 00:36:56.260 ' 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:56.260 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.520 14:01:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:56.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:36:56.521 14:01:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:03.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.112 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:03.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:03.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:03.373 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:03.373 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:03.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:03.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:37:03.634 00:37:03.634 --- 10.0.0.2 ping statistics --- 00:37:03.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.634 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:03.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:03.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:37:03.634 00:37:03.634 --- 10.0.0.1 ping statistics --- 00:37:03.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.634 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:03.634 14:01:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:06.279 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:06.279 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:06.539 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:06.539 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:06.800 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=962809 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 962809 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 962809 ']' 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:07.061 14:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:07.061 [2024-11-06 14:01:30.260117] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:37:07.061 [2024-11-06 14:01:30.260166] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.061 [2024-11-06 14:01:30.336142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:07.061 [2024-11-06 14:01:30.373793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.061 [2024-11-06 14:01:30.373827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.061 [2024-11-06 14:01:30.373835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.061 [2024-11-06 14:01:30.373842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.061 [2024-11-06 14:01:30.373848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:07.061 [2024-11-06 14:01:30.375345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.061 [2024-11-06 14:01:30.375445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:07.061 [2024-11-06 14:01:30.375601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.061 [2024-11-06 14:01:30.375602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:08.002 14:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.002 ************************************ 00:37:08.002 START TEST spdk_target_abort 00:37:08.002 ************************************ 00:37:08.002 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:08.002 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:08.002 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:08.002 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.002 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.264 spdk_targetn1 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.264 [2024-11-06 14:01:31.464762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.264 [2024-11-06 14:01:31.521058] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.264 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.265 14:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.526 [2024-11-06 14:01:31.820170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:840 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:08.526 [2024-11-06 14:01:31.820199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:37:08.526 [2024-11-06 14:01:31.836453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1448 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:08.526 [2024-11-06 14:01:31.836474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:37:08.526 [2024-11-06 14:01:31.869378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2648 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:08.526 [2024-11-06 
14:01:31.869395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:08.526 [2024-11-06 14:01:31.891160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3384 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:08.526 [2024-11-06 14:01:31.891176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00aa p:0 m:0 dnr:0 00:37:11.828 Initializing NVMe Controllers 00:37:11.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:11.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:11.828 Initialization complete. Launching workers. 00:37:11.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13067, failed: 4 00:37:11.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3082, failed to submit 9989 00:37:11.828 success 727, unsuccessful 2355, failed 0 00:37:11.828 14:01:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:11.828 14:01:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:11.828 [2024-11-06 14:01:35.122001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:704 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:37:11.828 [2024-11-06 14:01:35.122042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0064 p:1 m:0 dnr:0 00:37:11.828 [2024-11-06 14:01:35.153785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:1456 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:11.828 
[2024-11-06 14:01:35.153810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:37:11.828 [2024-11-06 14:01:35.176969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:2096 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:11.828 [2024-11-06 14:01:35.176993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:11.828 [2024-11-06 14:01:35.192901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:2448 len:8 PRP1 0x200004e46000 PRP2 0x0 00:37:11.828 [2024-11-06 14:01:35.192923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:12.089 [2024-11-06 14:01:35.216898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2992 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:12.089 [2024-11-06 14:01:35.216920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:12.089 [2024-11-06 14:01:35.232274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:3456 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:12.089 [2024-11-06 14:01:35.232296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00b5 p:0 m:0 dnr:0 00:37:13.475 [2024-11-06 14:01:36.751970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:39240 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:13.475 [2024-11-06 14:01:36.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:15.388 Initializing NVMe Controllers 00:37:15.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:37:15.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.388 Initialization complete. Launching workers. 00:37:15.388 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8815, failed: 7 00:37:15.388 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7588 00:37:15.388 success 335, unsuccessful 899, failed 0 00:37:15.388 14:01:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:15.388 14:01:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:18.685 Initializing NVMe Controllers 00:37:18.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:18.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:18.685 Initialization complete. Launching workers. 
00:37:18.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42213, failed: 0 00:37:18.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2720, failed to submit 39493 00:37:18.685 success 598, unsuccessful 2122, failed 0 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.685 14:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 962809 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 962809 ']' 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 962809 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:20.070 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 962809 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 962809' 00:37:20.330 killing process with pid 962809 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 962809 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 962809 00:37:20.330 00:37:20.330 real 0m12.461s 00:37:20.330 user 0m50.905s 00:37:20.330 sys 0m1.856s 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.330 ************************************ 00:37:20.330 END TEST spdk_target_abort 00:37:20.330 ************************************ 00:37:20.330 14:01:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:20.330 14:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:20.330 14:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:20.330 14:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:20.330 ************************************ 00:37:20.330 START TEST kernel_target_abort 00:37:20.330 ************************************ 00:37:20.330 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:20.331 14:01:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:20.331 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:20.591 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:20.592 14:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:23.895 Waiting for block devices as requested 00:37:23.895 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:23.895 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:23.895 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:23.895 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:23.895 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:23.895 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:24.155 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:24.155 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:24.155 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:24.415 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:24.415 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:24.415 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:24.676 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:24.676 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:24.676 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:24.676 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:24.937 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:25.197 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:25.197 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:25.197 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:25.198 No valid GPT data, bailing 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:25.198 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:25.458 00:37:25.458 Discovery Log Number of Records 2, Generation counter 2 00:37:25.458 =====Discovery Log Entry 0====== 00:37:25.458 trtype: tcp 00:37:25.458 adrfam: ipv4 00:37:25.458 subtype: current discovery subsystem 00:37:25.458 treq: not specified, sq flow control disable supported 00:37:25.458 portid: 1 00:37:25.458 trsvcid: 4420 00:37:25.458 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:25.458 traddr: 10.0.0.1 00:37:25.458 eflags: none 00:37:25.458 sectype: none 00:37:25.458 =====Discovery Log Entry 1====== 00:37:25.458 trtype: tcp 00:37:25.458 adrfam: ipv4 00:37:25.458 subtype: nvme subsystem 00:37:25.458 treq: not specified, sq flow control disable supported 00:37:25.458 portid: 1 00:37:25.458 trsvcid: 4420 00:37:25.458 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:25.458 traddr: 10.0.0.1 00:37:25.458 eflags: none 00:37:25.458 sectype: none 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:25.458 14:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:28.756 Initializing NVMe Controllers 00:37:28.756 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:28.756 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:28.756 Initialization complete. Launching workers. 
00:37:28.756 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66672, failed: 0 00:37:28.756 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66672, failed to submit 0 00:37:28.756 success 0, unsuccessful 66672, failed 0 00:37:28.756 14:01:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:28.756 14:01:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:32.051 Initializing NVMe Controllers 00:37:32.051 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:32.051 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:32.051 Initialization complete. Launching workers. 00:37:32.051 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107739, failed: 0 00:37:32.051 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27078, failed to submit 80661 00:37:32.051 success 0, unsuccessful 27078, failed 0 00:37:32.051 14:01:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:32.051 14:01:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:34.590 Initializing NVMe Controllers 00:37:34.590 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:34.590 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:34.590 Initialization complete. Launching workers. 
00:37:34.590 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100543, failed: 0 00:37:34.590 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25110, failed to submit 75433 00:37:34.590 success 0, unsuccessful 25110, failed 0 00:37:34.590 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:34.590 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:34.591 14:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:37.885 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:37.885 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:38.145 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:40.052 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:40.313 00:37:40.313 real 0m19.742s 00:37:40.313 user 0m9.557s 00:37:40.313 sys 0m5.910s 00:37:40.313 14:02:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:40.313 14:02:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:40.313 ************************************ 00:37:40.313 END TEST kernel_target_abort 00:37:40.313 ************************************ 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.313 rmmod nvme_tcp 00:37:40.313 rmmod nvme_fabrics 00:37:40.313 rmmod nvme_keyring 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 962809 ']' 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 962809 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 962809 ']' 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 962809 00:37:40.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (962809) - No such process 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 962809 is not found' 00:37:40.313 Process with pid 962809 is not found 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:40.313 14:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:43.614 Waiting for block devices as requested 00:37:43.614 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:43.614 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:43.875 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:43.875 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:43.875 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:44.137 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:44.137 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:44.137 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:44.397 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:44.397 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:44.657 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:44.657 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:44.657 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:44.657 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:44.918 0000:00:01.3 
(8086 0b00): vfio-pci -> ioatdma 00:37:44.918 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:44.918 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:45.178 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.726 14:02:10 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.726 00:37:47.726 real 0m51.168s 00:37:47.726 user 1m5.399s 00:37:47.726 sys 0m18.295s 00:37:47.726 14:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:47.726 14:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.726 ************************************ 00:37:47.726 END TEST nvmf_abort_qd_sizes 00:37:47.726 ************************************ 00:37:47.726 14:02:10 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:47.726 14:02:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:47.726 14:02:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:37:47.726 14:02:10 -- common/autotest_common.sh@10 -- # set +x 00:37:47.726 ************************************ 00:37:47.726 START TEST keyring_file 00:37:47.726 ************************************ 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:47.726 * Looking for test storage... 00:37:47.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.726 14:02:10 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.726 --rc genhtml_branch_coverage=1 00:37:47.726 --rc genhtml_function_coverage=1 00:37:47.726 --rc genhtml_legend=1 00:37:47.726 --rc geninfo_all_blocks=1 00:37:47.726 --rc geninfo_unexecuted_blocks=1 00:37:47.726 00:37:47.726 ' 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.726 --rc genhtml_branch_coverage=1 00:37:47.726 --rc genhtml_function_coverage=1 00:37:47.726 --rc genhtml_legend=1 00:37:47.726 --rc geninfo_all_blocks=1 00:37:47.726 --rc 
geninfo_unexecuted_blocks=1 00:37:47.726 00:37:47.726 ' 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.726 --rc genhtml_branch_coverage=1 00:37:47.726 --rc genhtml_function_coverage=1 00:37:47.726 --rc genhtml_legend=1 00:37:47.726 --rc geninfo_all_blocks=1 00:37:47.726 --rc geninfo_unexecuted_blocks=1 00:37:47.726 00:37:47.726 ' 00:37:47.726 14:02:10 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:47.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.726 --rc genhtml_branch_coverage=1 00:37:47.726 --rc genhtml_function_coverage=1 00:37:47.726 --rc genhtml_legend=1 00:37:47.726 --rc geninfo_all_blocks=1 00:37:47.726 --rc geninfo_unexecuted_blocks=1 00:37:47.726 00:37:47.726 ' 00:37:47.726 14:02:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:47.726 14:02:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.726 14:02:10 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.726 14:02:10 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.726 14:02:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.727 14:02:10 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.727 14:02:10 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.727 14:02:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.727 14:02:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:47.727 14:02:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:47.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Hxv9Jm9pwX 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Hxv9Jm9pwX 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Hxv9Jm9pwX 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Hxv9Jm9pwX 00:37:47.727 14:02:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lhiSftzTMD 00:37:47.727 14:02:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:47.727 14:02:10 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:47.727 14:02:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:47.727 14:02:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:47.727 14:02:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lhiSftzTMD 00:37:47.727 14:02:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lhiSftzTMD 00:37:47.727 14:02:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lhiSftzTMD 
00:37:47.727 14:02:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=972963 00:37:47.727 14:02:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 972963 00:37:47.727 14:02:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 972963 ']' 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:47.727 14:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:47.988 [2024-11-06 14:02:11.107294] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:37:47.988 [2024-11-06 14:02:11.107372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972963 ] 00:37:47.988 [2024-11-06 14:02:11.182968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.988 [2024-11-06 14:02:11.225917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.560 14:02:11 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:48.560 14:02:11 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:48.560 14:02:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:48.560 14:02:11 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.560 14:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:48.560 [2024-11-06 14:02:11.893986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.560 null0 00:37:48.560 [2024-11-06 14:02:11.926035] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:48.560 [2024-11-06 14:02:11.926308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.821 14:02:11 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:48.821 [2024-11-06 14:02:11.958105] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:48.821 request: 00:37:48.821 { 00:37:48.821 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:48.821 "secure_channel": false, 00:37:48.821 "listen_address": { 00:37:48.821 "trtype": "tcp", 00:37:48.821 "traddr": "127.0.0.1", 00:37:48.821 "trsvcid": "4420" 00:37:48.821 }, 00:37:48.821 "method": "nvmf_subsystem_add_listener", 00:37:48.821 "req_id": 1 00:37:48.821 } 00:37:48.821 Got JSON-RPC error response 00:37:48.821 response: 00:37:48.821 { 00:37:48.821 "code": -32602, 00:37:48.821 "message": "Invalid parameters" 00:37:48.821 } 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:48.821 14:02:11 keyring_file -- keyring/file.sh@47 -- # bperfpid=973063 00:37:48.821 14:02:11 keyring_file -- keyring/file.sh@49 -- # waitforlisten 973063 /var/tmp/bperf.sock 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 973063 ']' 00:37:48.821 14:02:11 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:48.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:48.821 14:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:48.821 [2024-11-06 14:02:12.016739] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:37:48.821 [2024-11-06 14:02:12.016802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973063 ] 00:37:48.821 [2024-11-06 14:02:12.105213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.821 [2024-11-06 14:02:12.140836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.764 14:02:12 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:49.764 14:02:12 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:49.764 14:02:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:49.764 14:02:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:49.764 14:02:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.lhiSftzTMD 00:37:49.764 14:02:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lhiSftzTMD 00:37:50.024 14:02:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:50.024 14:02:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:50.024 14:02:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Hxv9Jm9pwX == \/\t\m\p\/\t\m\p\.\H\x\v\9\J\m\9\p\w\X ]] 00:37:50.024 14:02:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:50.024 14:02:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:50.024 14:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:50.284 14:02:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lhiSftzTMD == \/\t\m\p\/\t\m\p\.\l\h\i\S\f\t\z\T\M\D ]] 00:37:50.284 14:02:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:50.284 14:02:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:50.284 14:02:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:50.284 14:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.284 14:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:50.284 14:02:13 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:50.548 14:02:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:50.548 14:02:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:50.548 14:02:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:50.548 14:02:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:50.548 14:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:50.810 [2024-11-06 14:02:13.995114] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:50.810 nvme0n1 00:37:50.810 14:02:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:50.810 14:02:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:50.810 14:02:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:50.810 14:02:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:50.810 14:02:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:50.810 14:02:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:51.070 14:02:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:51.070 14:02:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:51.070 14:02:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:51.070 14:02:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:51.070 14:02:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:51.070 14:02:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:51.070 14:02:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:51.070 14:02:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:51.070 14:02:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:51.330 Running I/O for 1 seconds... 
00:37:52.270 15805.00 IOPS, 61.74 MiB/s 00:37:52.270 Latency(us) 00:37:52.270 [2024-11-06T13:02:15.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.270 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:52.270 nvme0n1 : 1.01 15819.64 61.80 0.00 0.00 8059.86 5925.55 18677.76 00:37:52.270 [2024-11-06T13:02:15.646Z] =================================================================================================================== 00:37:52.270 [2024-11-06T13:02:15.646Z] Total : 15819.64 61.80 0.00 0.00 8059.86 5925.55 18677.76 00:37:52.270 { 00:37:52.270 "results": [ 00:37:52.270 { 00:37:52.270 "job": "nvme0n1", 00:37:52.270 "core_mask": "0x2", 00:37:52.270 "workload": "randrw", 00:37:52.270 "percentage": 50, 00:37:52.270 "status": "finished", 00:37:52.270 "queue_depth": 128, 00:37:52.270 "io_size": 4096, 00:37:52.270 "runtime": 1.007292, 00:37:52.270 "iops": 15819.643162062242, 00:37:52.270 "mibps": 61.795481101805635, 00:37:52.270 "io_failed": 0, 00:37:52.270 "io_timeout": 0, 00:37:52.270 "avg_latency_us": 8059.858695952306, 00:37:52.270 "min_latency_us": 5925.546666666667, 00:37:52.270 "max_latency_us": 18677.76 00:37:52.270 } 00:37:52.270 ], 00:37:52.270 "core_count": 1 00:37:52.270 } 00:37:52.270 14:02:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:52.270 14:02:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:52.531 14:02:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:52.531 14:02:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:52.531 14:02:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.531 14:02:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.531 14:02:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:52.531 14:02:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.791 14:02:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:52.791 14:02:15 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:52.791 14:02:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:52.791 14:02:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.791 14:02:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.791 14:02:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:52.791 14:02:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.791 14:02:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:52.791 14:02:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:52.791 14:02:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:52.791 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:53.052 [2024-11-06 14:02:16.249668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:53.052 [2024-11-06 14:02:16.249916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245cc10 (107): Transport endpoint is not connected 00:37:53.052 [2024-11-06 14:02:16.250910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245cc10 (9): Bad file descriptor 00:37:53.052 [2024-11-06 14:02:16.251912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:53.052 [2024-11-06 14:02:16.251919] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:53.052 [2024-11-06 14:02:16.251925] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:53.052 [2024-11-06 14:02:16.251931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:53.052 request: 00:37:53.052 { 00:37:53.052 "name": "nvme0", 00:37:53.052 "trtype": "tcp", 00:37:53.052 "traddr": "127.0.0.1", 00:37:53.052 "adrfam": "ipv4", 00:37:53.052 "trsvcid": "4420", 00:37:53.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:53.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:53.052 "prchk_reftag": false, 00:37:53.052 "prchk_guard": false, 00:37:53.052 "hdgst": false, 00:37:53.052 "ddgst": false, 00:37:53.052 "psk": "key1", 00:37:53.052 "allow_unrecognized_csi": false, 00:37:53.052 "method": "bdev_nvme_attach_controller", 00:37:53.052 "req_id": 1 00:37:53.052 } 00:37:53.052 Got JSON-RPC error response 00:37:53.052 response: 00:37:53.052 { 00:37:53.052 "code": -5, 00:37:53.052 "message": "Input/output error" 00:37:53.052 } 00:37:53.052 14:02:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:53.052 14:02:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:53.052 14:02:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:53.052 14:02:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:53.052 14:02:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:53.052 14:02:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:53.052 14:02:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:53.052 14:02:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:53.052 14:02:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:53.052 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.312 14:02:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:53.312 14:02:16 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.312 14:02:16 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:53.312 14:02:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:53.312 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:53.572 14:02:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:53.572 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:53.572 14:02:16 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:53.572 14:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.572 14:02:16 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:53.833 14:02:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:53.833 14:02:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Hxv9Jm9pwX 00:37:53.833 14:02:17 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:53.833 14:02:17 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:53.833 14:02:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:53.833 14:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:54.093 [2024-11-06 14:02:17.270529] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Hxv9Jm9pwX': 0100660 00:37:54.093 [2024-11-06 14:02:17.270549] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:54.093 request: 00:37:54.093 { 00:37:54.093 "name": "key0", 00:37:54.093 "path": "/tmp/tmp.Hxv9Jm9pwX", 00:37:54.093 "method": "keyring_file_add_key", 00:37:54.093 "req_id": 1 00:37:54.093 } 00:37:54.093 Got JSON-RPC error response 00:37:54.093 response: 00:37:54.093 { 00:37:54.093 "code": -1, 00:37:54.093 "message": "Operation not permitted" 00:37:54.093 } 00:37:54.093 14:02:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:54.093 14:02:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:54.093 14:02:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:54.093 14:02:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:54.093 14:02:17 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Hxv9Jm9pwX 00:37:54.093 14:02:17 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:54.093 14:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Hxv9Jm9pwX 00:37:54.353 14:02:17 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Hxv9Jm9pwX 00:37:54.353 14:02:17 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:54.353 14:02:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:54.353 14:02:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:54.353 14:02:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.353 14:02:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.614 [2024-11-06 14:02:17.815917] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Hxv9Jm9pwX': No such file or directory 00:37:54.614 [2024-11-06 14:02:17.815934] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:54.614 [2024-11-06 14:02:17.815946] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:54.614 [2024-11-06 14:02:17.815951] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:54.614 [2024-11-06 14:02:17.815957] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:54.614 [2024-11-06 14:02:17.815962] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:54.614 request: 00:37:54.614 { 00:37:54.614 "name": "nvme0", 00:37:54.614 "trtype": "tcp", 00:37:54.614 "traddr": "127.0.0.1", 00:37:54.614 "adrfam": "ipv4", 00:37:54.614 "trsvcid": "4420", 00:37:54.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.614 "prchk_reftag": false, 00:37:54.614 "prchk_guard": false, 00:37:54.614 "hdgst": false, 00:37:54.614 "ddgst": false, 00:37:54.614 "psk": "key0", 00:37:54.614 "allow_unrecognized_csi": false, 00:37:54.614 "method": "bdev_nvme_attach_controller", 00:37:54.614 "req_id": 1 00:37:54.614 } 00:37:54.614 Got JSON-RPC error response 00:37:54.614 response: 00:37:54.614 { 00:37:54.614 "code": -19, 00:37:54.614 "message": "No such device" 00:37:54.614 } 00:37:54.614 14:02:17 keyring_file -- common/autotest_common.sh@653 
-- # es=1 00:37:54.614 14:02:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:54.614 14:02:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:54.614 14:02:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:54.614 14:02:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:54.614 14:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:54.875 14:02:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XC4jku8m1n 00:37:54.876 14:02:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:54.876 14:02:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:54.876 14:02:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:54.876 14:02:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:54.876 14:02:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:54.876 14:02:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:54.876 14:02:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:54.876 14:02:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XC4jku8m1n 00:37:54.876 14:02:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XC4jku8m1n 
00:37:54.876 14:02:18 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.XC4jku8m1n 00:37:54.876 14:02:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XC4jku8m1n 00:37:54.876 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XC4jku8m1n 00:37:54.876 14:02:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:54.876 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:55.135 nvme0n1 00:37:55.135 14:02:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:55.135 14:02:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:55.135 14:02:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:55.135 14:02:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.135 14:02:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.135 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.395 14:02:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:55.395 14:02:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:55.395 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:55.656 14:02:18 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:55.656 14:02:18 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.656 14:02:18 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:55.656 14:02:18 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.656 14:02:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.916 14:02:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:55.917 14:02:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:55.917 14:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:56.177 14:02:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:56.177 14:02:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:56.177 14:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.177 14:02:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:56.177 14:02:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XC4jku8m1n 00:37:56.177 14:02:19 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XC4jku8m1n 00:37:56.464 14:02:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lhiSftzTMD 00:37:56.464 14:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lhiSftzTMD 00:37:56.792 14:02:19 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.792 14:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.792 nvme0n1 00:37:56.792 14:02:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:56.792 14:02:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:57.084 14:02:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:57.084 "subsystems": [ 00:37:57.084 { 00:37:57.084 "subsystem": "keyring", 00:37:57.084 "config": [ 00:37:57.084 { 00:37:57.084 "method": "keyring_file_add_key", 00:37:57.084 "params": { 00:37:57.084 "name": "key0", 00:37:57.084 "path": "/tmp/tmp.XC4jku8m1n" 00:37:57.084 } 00:37:57.084 }, 00:37:57.084 { 00:37:57.084 "method": "keyring_file_add_key", 00:37:57.084 "params": { 00:37:57.084 "name": "key1", 00:37:57.084 "path": "/tmp/tmp.lhiSftzTMD" 00:37:57.084 } 00:37:57.084 } 00:37:57.084 ] 00:37:57.084 }, 00:37:57.084 { 00:37:57.084 "subsystem": "iobuf", 00:37:57.084 "config": [ 00:37:57.084 { 00:37:57.084 "method": "iobuf_set_options", 
00:37:57.085 "params": { 00:37:57.085 "small_pool_count": 8192, 00:37:57.085 "large_pool_count": 1024, 00:37:57.085 "small_bufsize": 8192, 00:37:57.085 "large_bufsize": 135168, 00:37:57.085 "enable_numa": false 00:37:57.085 } 00:37:57.085 } 00:37:57.085 ] 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "subsystem": "sock", 00:37:57.085 "config": [ 00:37:57.085 { 00:37:57.085 "method": "sock_set_default_impl", 00:37:57.085 "params": { 00:37:57.085 "impl_name": "posix" 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "sock_impl_set_options", 00:37:57.085 "params": { 00:37:57.085 "impl_name": "ssl", 00:37:57.085 "recv_buf_size": 4096, 00:37:57.085 "send_buf_size": 4096, 00:37:57.085 "enable_recv_pipe": true, 00:37:57.085 "enable_quickack": false, 00:37:57.085 "enable_placement_id": 0, 00:37:57.085 "enable_zerocopy_send_server": true, 00:37:57.085 "enable_zerocopy_send_client": false, 00:37:57.085 "zerocopy_threshold": 0, 00:37:57.085 "tls_version": 0, 00:37:57.085 "enable_ktls": false 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "sock_impl_set_options", 00:37:57.085 "params": { 00:37:57.085 "impl_name": "posix", 00:37:57.085 "recv_buf_size": 2097152, 00:37:57.085 "send_buf_size": 2097152, 00:37:57.085 "enable_recv_pipe": true, 00:37:57.085 "enable_quickack": false, 00:37:57.085 "enable_placement_id": 0, 00:37:57.085 "enable_zerocopy_send_server": true, 00:37:57.085 "enable_zerocopy_send_client": false, 00:37:57.085 "zerocopy_threshold": 0, 00:37:57.085 "tls_version": 0, 00:37:57.085 "enable_ktls": false 00:37:57.085 } 00:37:57.085 } 00:37:57.085 ] 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "subsystem": "vmd", 00:37:57.085 "config": [] 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "subsystem": "accel", 00:37:57.085 "config": [ 00:37:57.085 { 00:37:57.085 "method": "accel_set_options", 00:37:57.085 "params": { 00:37:57.085 "small_cache_size": 128, 00:37:57.085 "large_cache_size": 16, 00:37:57.085 "task_count": 2048, 00:37:57.085 
"sequence_count": 2048, 00:37:57.085 "buf_count": 2048 00:37:57.085 } 00:37:57.085 } 00:37:57.085 ] 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "subsystem": "bdev", 00:37:57.085 "config": [ 00:37:57.085 { 00:37:57.085 "method": "bdev_set_options", 00:37:57.085 "params": { 00:37:57.085 "bdev_io_pool_size": 65535, 00:37:57.085 "bdev_io_cache_size": 256, 00:37:57.085 "bdev_auto_examine": true, 00:37:57.085 "iobuf_small_cache_size": 128, 00:37:57.085 "iobuf_large_cache_size": 16 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_raid_set_options", 00:37:57.085 "params": { 00:37:57.085 "process_window_size_kb": 1024, 00:37:57.085 "process_max_bandwidth_mb_sec": 0 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_iscsi_set_options", 00:37:57.085 "params": { 00:37:57.085 "timeout_sec": 30 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_nvme_set_options", 00:37:57.085 "params": { 00:37:57.085 "action_on_timeout": "none", 00:37:57.085 "timeout_us": 0, 00:37:57.085 "timeout_admin_us": 0, 00:37:57.085 "keep_alive_timeout_ms": 10000, 00:37:57.085 "arbitration_burst": 0, 00:37:57.085 "low_priority_weight": 0, 00:37:57.085 "medium_priority_weight": 0, 00:37:57.085 "high_priority_weight": 0, 00:37:57.085 "nvme_adminq_poll_period_us": 10000, 00:37:57.085 "nvme_ioq_poll_period_us": 0, 00:37:57.085 "io_queue_requests": 512, 00:37:57.085 "delay_cmd_submit": true, 00:37:57.085 "transport_retry_count": 4, 00:37:57.085 "bdev_retry_count": 3, 00:37:57.085 "transport_ack_timeout": 0, 00:37:57.085 "ctrlr_loss_timeout_sec": 0, 00:37:57.085 "reconnect_delay_sec": 0, 00:37:57.085 "fast_io_fail_timeout_sec": 0, 00:37:57.085 "disable_auto_failback": false, 00:37:57.085 "generate_uuids": false, 00:37:57.085 "transport_tos": 0, 00:37:57.085 "nvme_error_stat": false, 00:37:57.085 "rdma_srq_size": 0, 00:37:57.085 "io_path_stat": false, 00:37:57.085 "allow_accel_sequence": false, 00:37:57.085 "rdma_max_cq_size": 0, 
00:37:57.085 "rdma_cm_event_timeout_ms": 0, 00:37:57.085 "dhchap_digests": [ 00:37:57.085 "sha256", 00:37:57.085 "sha384", 00:37:57.085 "sha512" 00:37:57.085 ], 00:37:57.085 "dhchap_dhgroups": [ 00:37:57.085 "null", 00:37:57.085 "ffdhe2048", 00:37:57.085 "ffdhe3072", 00:37:57.085 "ffdhe4096", 00:37:57.085 "ffdhe6144", 00:37:57.085 "ffdhe8192" 00:37:57.085 ] 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_nvme_attach_controller", 00:37:57.085 "params": { 00:37:57.085 "name": "nvme0", 00:37:57.085 "trtype": "TCP", 00:37:57.085 "adrfam": "IPv4", 00:37:57.085 "traddr": "127.0.0.1", 00:37:57.085 "trsvcid": "4420", 00:37:57.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.085 "prchk_reftag": false, 00:37:57.085 "prchk_guard": false, 00:37:57.085 "ctrlr_loss_timeout_sec": 0, 00:37:57.085 "reconnect_delay_sec": 0, 00:37:57.085 "fast_io_fail_timeout_sec": 0, 00:37:57.085 "psk": "key0", 00:37:57.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.085 "hdgst": false, 00:37:57.085 "ddgst": false, 00:37:57.085 "multipath": "multipath" 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_nvme_set_hotplug", 00:37:57.085 "params": { 00:37:57.085 "period_us": 100000, 00:37:57.085 "enable": false 00:37:57.085 } 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "method": "bdev_wait_for_examine" 00:37:57.085 } 00:37:57.085 ] 00:37:57.085 }, 00:37:57.085 { 00:37:57.085 "subsystem": "nbd", 00:37:57.085 "config": [] 00:37:57.085 } 00:37:57.085 ] 00:37:57.085 }' 00:37:57.085 14:02:20 keyring_file -- keyring/file.sh@115 -- # killprocess 973063 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 973063 ']' 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@956 -- # kill -0 973063 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:57.085 14:02:20 keyring_file -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 973063 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 973063' 00:37:57.085 killing process with pid 973063 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@971 -- # kill 973063 00:37:57.085 Received shutdown signal, test time was about 1.000000 seconds 00:37:57.085 00:37:57.085 Latency(us) 00:37:57.085 [2024-11-06T13:02:20.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.085 [2024-11-06T13:02:20.461Z] =================================================================================================================== 00:37:57.085 [2024-11-06T13:02:20.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:57.085 14:02:20 keyring_file -- common/autotest_common.sh@976 -- # wait 973063 00:37:57.346 14:02:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=974876 00:37:57.346 14:02:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 974876 /var/tmp/bperf.sock 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 974876 ']' 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:57.346 14:02:20 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:57.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:57.346 14:02:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:57.346 14:02:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:57.346 "subsystems": [ 00:37:57.346 { 00:37:57.346 "subsystem": "keyring", 00:37:57.346 "config": [ 00:37:57.346 { 00:37:57.346 "method": "keyring_file_add_key", 00:37:57.346 "params": { 00:37:57.346 "name": "key0", 00:37:57.346 "path": "/tmp/tmp.XC4jku8m1n" 00:37:57.346 } 00:37:57.346 }, 00:37:57.346 { 00:37:57.346 "method": "keyring_file_add_key", 00:37:57.346 "params": { 00:37:57.346 "name": "key1", 00:37:57.346 "path": "/tmp/tmp.lhiSftzTMD" 00:37:57.346 } 00:37:57.346 } 00:37:57.347 ] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "iobuf", 00:37:57.347 "config": [ 00:37:57.347 { 00:37:57.347 "method": "iobuf_set_options", 00:37:57.347 "params": { 00:37:57.347 "small_pool_count": 8192, 00:37:57.347 "large_pool_count": 1024, 00:37:57.347 "small_bufsize": 8192, 00:37:57.347 "large_bufsize": 135168, 00:37:57.347 "enable_numa": false 00:37:57.347 } 00:37:57.347 } 00:37:57.347 ] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "sock", 00:37:57.347 "config": [ 00:37:57.347 { 00:37:57.347 "method": "sock_set_default_impl", 00:37:57.347 "params": { 00:37:57.347 "impl_name": "posix" 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "sock_impl_set_options", 00:37:57.347 "params": { 00:37:57.347 "impl_name": "ssl", 00:37:57.347 "recv_buf_size": 4096, 00:37:57.347 "send_buf_size": 4096, 00:37:57.347 "enable_recv_pipe": true, 00:37:57.347 "enable_quickack": false, 00:37:57.347 "enable_placement_id": 0, 00:37:57.347 "enable_zerocopy_send_server": true, 00:37:57.347 "enable_zerocopy_send_client": false, 00:37:57.347 "zerocopy_threshold": 0, 00:37:57.347 "tls_version": 0, 00:37:57.347 "enable_ktls": 
false 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "sock_impl_set_options", 00:37:57.347 "params": { 00:37:57.347 "impl_name": "posix", 00:37:57.347 "recv_buf_size": 2097152, 00:37:57.347 "send_buf_size": 2097152, 00:37:57.347 "enable_recv_pipe": true, 00:37:57.347 "enable_quickack": false, 00:37:57.347 "enable_placement_id": 0, 00:37:57.347 "enable_zerocopy_send_server": true, 00:37:57.347 "enable_zerocopy_send_client": false, 00:37:57.347 "zerocopy_threshold": 0, 00:37:57.347 "tls_version": 0, 00:37:57.347 "enable_ktls": false 00:37:57.347 } 00:37:57.347 } 00:37:57.347 ] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "vmd", 00:37:57.347 "config": [] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "accel", 00:37:57.347 "config": [ 00:37:57.347 { 00:37:57.347 "method": "accel_set_options", 00:37:57.347 "params": { 00:37:57.347 "small_cache_size": 128, 00:37:57.347 "large_cache_size": 16, 00:37:57.347 "task_count": 2048, 00:37:57.347 "sequence_count": 2048, 00:37:57.347 "buf_count": 2048 00:37:57.347 } 00:37:57.347 } 00:37:57.347 ] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "bdev", 00:37:57.347 "config": [ 00:37:57.347 { 00:37:57.347 "method": "bdev_set_options", 00:37:57.347 "params": { 00:37:57.347 "bdev_io_pool_size": 65535, 00:37:57.347 "bdev_io_cache_size": 256, 00:37:57.347 "bdev_auto_examine": true, 00:37:57.347 "iobuf_small_cache_size": 128, 00:37:57.347 "iobuf_large_cache_size": 16 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_raid_set_options", 00:37:57.347 "params": { 00:37:57.347 "process_window_size_kb": 1024, 00:37:57.347 "process_max_bandwidth_mb_sec": 0 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_iscsi_set_options", 00:37:57.347 "params": { 00:37:57.347 "timeout_sec": 30 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_nvme_set_options", 00:37:57.347 "params": { 00:37:57.347 "action_on_timeout": "none", 
00:37:57.347 "timeout_us": 0, 00:37:57.347 "timeout_admin_us": 0, 00:37:57.347 "keep_alive_timeout_ms": 10000, 00:37:57.347 "arbitration_burst": 0, 00:37:57.347 "low_priority_weight": 0, 00:37:57.347 "medium_priority_weight": 0, 00:37:57.347 "high_priority_weight": 0, 00:37:57.347 "nvme_adminq_poll_period_us": 10000, 00:37:57.347 "nvme_ioq_poll_period_us": 0, 00:37:57.347 "io_queue_requests": 512, 00:37:57.347 "delay_cmd_submit": true, 00:37:57.347 "transport_retry_count": 4, 00:37:57.347 "bdev_retry_count": 3, 00:37:57.347 "transport_ack_timeout": 0, 00:37:57.347 "ctrlr_loss_timeout_sec": 0, 00:37:57.347 "reconnect_delay_sec": 0, 00:37:57.347 "fast_io_fail_timeout_sec": 0, 00:37:57.347 "disable_auto_failback": false, 00:37:57.347 "generate_uuids": false, 00:37:57.347 "transport_tos": 0, 00:37:57.347 "nvme_error_stat": false, 00:37:57.347 "rdma_srq_size": 0, 00:37:57.347 "io_path_stat": false, 00:37:57.347 "allow_accel_sequence": false, 00:37:57.347 "rdma_max_cq_size": 0, 00:37:57.347 "rdma_cm_event_timeout_ms": 0, 00:37:57.347 "dhchap_digests": [ 00:37:57.347 "sha256", 00:37:57.347 "sha384", 00:37:57.347 "sha512" 00:37:57.347 ], 00:37:57.347 "dhchap_dhgroups": [ 00:37:57.347 "null", 00:37:57.347 "ffdhe2048", 00:37:57.347 "ffdhe3072", 00:37:57.347 "ffdhe4096", 00:37:57.347 "ffdhe6144", 00:37:57.347 "ffdhe8192" 00:37:57.347 ] 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_nvme_attach_controller", 00:37:57.347 "params": { 00:37:57.347 "name": "nvme0", 00:37:57.347 "trtype": "TCP", 00:37:57.347 "adrfam": "IPv4", 00:37:57.347 "traddr": "127.0.0.1", 00:37:57.347 "trsvcid": "4420", 00:37:57.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.347 "prchk_reftag": false, 00:37:57.347 "prchk_guard": false, 00:37:57.347 "ctrlr_loss_timeout_sec": 0, 00:37:57.347 "reconnect_delay_sec": 0, 00:37:57.347 "fast_io_fail_timeout_sec": 0, 00:37:57.347 "psk": "key0", 00:37:57.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.347 "hdgst": false, 
00:37:57.347 "ddgst": false, 00:37:57.347 "multipath": "multipath" 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_nvme_set_hotplug", 00:37:57.347 "params": { 00:37:57.347 "period_us": 100000, 00:37:57.347 "enable": false 00:37:57.347 } 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "method": "bdev_wait_for_examine" 00:37:57.347 } 00:37:57.347 ] 00:37:57.347 }, 00:37:57.347 { 00:37:57.347 "subsystem": "nbd", 00:37:57.347 "config": [] 00:37:57.347 } 00:37:57.347 ] 00:37:57.347 }' 00:37:57.347 [2024-11-06 14:02:20.596337] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 00:37:57.347 [2024-11-06 14:02:20.596393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974876 ] 00:37:57.347 [2024-11-06 14:02:20.678362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.347 [2024-11-06 14:02:20.707992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.608 [2024-11-06 14:02:20.850874] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:58.179 14:02:21 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:58.179 14:02:21 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:58.179 14:02:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:58.179 14:02:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:58.179 14:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.439 14:02:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:58.439 14:02:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.439 14:02:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:58.439 14:02:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:58.439 14:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.702 14:02:21 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:58.702 14:02:21 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:58.702 14:02:21 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:58.702 14:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:58.967 14:02:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:58.967 14:02:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:58.967 14:02:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XC4jku8m1n /tmp/tmp.lhiSftzTMD 00:37:58.967 14:02:22 keyring_file -- keyring/file.sh@20 -- # killprocess 974876 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 974876 ']' 00:37:58.967 
14:02:22 keyring_file -- common/autotest_common.sh@956 -- # kill -0 974876 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 974876 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 974876' 00:37:58.967 killing process with pid 974876 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@971 -- # kill 974876 00:37:58.967 Received shutdown signal, test time was about 1.000000 seconds 00:37:58.967 00:37:58.967 Latency(us) 00:37:58.967 [2024-11-06T13:02:22.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.967 [2024-11-06T13:02:22.343Z] =================================================================================================================== 00:37:58.967 [2024-11-06T13:02:22.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@976 -- # wait 974876 00:37:58.967 14:02:22 keyring_file -- keyring/file.sh@21 -- # killprocess 972963 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 972963 ']' 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@956 -- # kill -0 972963 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 972963 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 972963' 00:37:58.967 killing process with pid 972963 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@971 -- # kill 972963 00:37:58.967 14:02:22 keyring_file -- common/autotest_common.sh@976 -- # wait 972963 00:37:59.226 00:37:59.226 real 0m11.835s 00:37:59.226 user 0m28.508s 00:37:59.226 sys 0m2.600s 00:37:59.226 14:02:22 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:59.226 14:02:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:59.226 ************************************ 00:37:59.226 END TEST keyring_file 00:37:59.226 ************************************ 00:37:59.226 14:02:22 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:59.226 14:02:22 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:59.226 14:02:22 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:59.226 14:02:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:59.226 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:37:59.226 ************************************ 00:37:59.226 START TEST keyring_linux 00:37:59.226 ************************************ 00:37:59.226 14:02:22 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:59.486 Joined session keyring: 1008610703 00:37:59.486 * Looking for test storage... 
00:37:59.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:59.486 14:02:22 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:59.486 14:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:59.486 14:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:59.486 14:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.486 14:02:22 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:59.487 14:02:22 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.487 14:02:22 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:59.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.487 --rc genhtml_branch_coverage=1 00:37:59.487 --rc genhtml_function_coverage=1 00:37:59.487 --rc genhtml_legend=1 00:37:59.487 --rc geninfo_all_blocks=1 00:37:59.487 --rc geninfo_unexecuted_blocks=1 00:37:59.487 00:37:59.487 ' 00:37:59.487 14:02:22 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:59.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.487 --rc genhtml_branch_coverage=1 00:37:59.487 --rc genhtml_function_coverage=1 00:37:59.487 --rc genhtml_legend=1 00:37:59.487 --rc geninfo_all_blocks=1 00:37:59.487 --rc geninfo_unexecuted_blocks=1 00:37:59.487 00:37:59.487 ' 
00:37:59.487 14:02:22 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:59.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.487 --rc genhtml_branch_coverage=1 00:37:59.487 --rc genhtml_function_coverage=1 00:37:59.487 --rc genhtml_legend=1 00:37:59.487 --rc geninfo_all_blocks=1 00:37:59.487 --rc geninfo_unexecuted_blocks=1 00:37:59.487 00:37:59.487 ' 00:37:59.487 14:02:22 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:59.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.487 --rc genhtml_branch_coverage=1 00:37:59.487 --rc genhtml_function_coverage=1 00:37:59.487 --rc genhtml_legend=1 00:37:59.487 --rc geninfo_all_blocks=1 00:37:59.487 --rc geninfo_unexecuted_blocks=1 00:37:59.487 00:37:59.487 ' 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.487 14:02:22 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.487 14:02:22 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.487 14:02:22 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.487 14:02:22 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.487 14:02:22 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:59.487 14:02:22 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:59.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:59.487 14:02:22 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:59.487 14:02:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:59.487 14:02:22 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:59.747 /tmp/:spdk-test:key0 00:37:59.747 14:02:22 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:59.747 14:02:22 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:59.747 14:02:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:59.747 /tmp/:spdk-test:key1 00:37:59.747 14:02:22 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=975312 00:37:59.747 14:02:22 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 975312 00:37:59.747 14:02:22 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 975312 ']' 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:59.747 14:02:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:59.747 [2024-11-06 14:02:22.982532] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:37:59.747 [2024-11-06 14:02:22.982606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975312 ] 00:37:59.747 [2024-11-06 14:02:23.055571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.747 [2024-11-06 14:02:23.091935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:00.006 14:02:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:00.006 [2024-11-06 14:02:23.290088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.006 null0 00:38:00.006 [2024-11-06 14:02:23.322139] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:00.006 [2024-11-06 14:02:23.322537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.006 14:02:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:00.006 1073387717 00:38:00.006 14:02:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:00.006 424885977 00:38:00.006 14:02:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=975375 00:38:00.006 14:02:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 975375 /var/tmp/bperf.sock 00:38:00.006 14:02:23 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 975375 ']' 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:00.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:00.006 14:02:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:00.267 [2024-11-06 14:02:23.399218] Starting SPDK v25.01-pre git sha1 cfcfe6c3e / DPDK 24.03.0 initialization... 
00:38:00.267 [2024-11-06 14:02:23.399266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975375 ] 00:38:00.267 [2024-11-06 14:02:23.483879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.267 [2024-11-06 14:02:23.513877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.837 14:02:24 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:00.837 14:02:24 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:00.837 14:02:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:00.837 14:02:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:01.097 14:02:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:01.097 14:02:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:01.356 14:02:24 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:01.356 14:02:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:01.615 [2024-11-06 14:02:24.757755] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:01.615 nvme0n1 00:38:01.615 14:02:24 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:01.615 14:02:24 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:01.615 14:02:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:01.615 14:02:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:01.615 14:02:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:01.615 14:02:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:01.875 14:02:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.875 14:02:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.875 14:02:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@25 -- # sn=1073387717 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@26 -- # [[ 1073387717 == \1\0\7\3\3\8\7\7\1\7 ]] 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1073387717 00:38:01.875 14:02:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:01.875 14:02:25 
keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:02.134 Running I/O for 1 seconds... 00:38:03.072 15959.00 IOPS, 62.34 MiB/s 00:38:03.072 Latency(us) 00:38:03.072 [2024-11-06T13:02:26.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.072 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:03.072 nvme0n1 : 1.01 15961.81 62.35 0.00 0.00 7981.23 6690.13 15510.19 00:38:03.072 [2024-11-06T13:02:26.448Z] =================================================================================================================== 00:38:03.072 [2024-11-06T13:02:26.448Z] Total : 15961.81 62.35 0.00 0.00 7981.23 6690.13 15510.19 00:38:03.072 { 00:38:03.072 "results": [ 00:38:03.072 { 00:38:03.072 "job": "nvme0n1", 00:38:03.072 "core_mask": "0x2", 00:38:03.072 "workload": "randread", 00:38:03.072 "status": "finished", 00:38:03.072 "queue_depth": 128, 00:38:03.072 "io_size": 4096, 00:38:03.072 "runtime": 1.007906, 00:38:03.072 "iops": 15961.805962063923, 00:38:03.072 "mibps": 62.3508045393122, 00:38:03.072 "io_failed": 0, 00:38:03.072 "io_timeout": 0, 00:38:03.072 "avg_latency_us": 7981.227939665174, 00:38:03.072 "min_latency_us": 6690.133333333333, 00:38:03.072 "max_latency_us": 15510.186666666666 00:38:03.072 } 00:38:03.072 ], 00:38:03.072 "core_count": 1 00:38:03.072 } 00:38:03.072 14:02:26 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:03.072 14:02:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:03.333 
14:02:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:03.333 14:02:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:03.333 14:02:26 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.333 14:02:26 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.333 14:02:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:03.594 [2024-11-06 14:02:26.822915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:03.594 [2024-11-06 14:02:26.823670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a9480 (107): Transport endpoint is not connected 00:38:03.594 [2024-11-06 14:02:26.824666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a9480 (9): Bad file descriptor 00:38:03.594 [2024-11-06 14:02:26.825667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:03.594 [2024-11-06 14:02:26.825674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:03.594 [2024-11-06 14:02:26.825680] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:03.594 [2024-11-06 14:02:26.825687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
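The failed attach above is wrapped in the NOT helper from autotest_common.sh, which inverts a command's exit status so that an expected failure counts as a pass — the es=0 / es=1 bookkeeping is visible in the @650–@677 lines of the log. A minimal sketch of that negation pattern; the real helper also validates its argument with `type -t`, so this is a reconstruction, not the actual implementation.

```shell
# Sketch of a NOT-style helper: succeed only when the wrapped command fails,
# mirroring the local es=0 ... es=1 flow in the log above.
NOT() {
    local es=0
    "$@" || es=$?          # run the wrapped command, capture its exit status
    (( es != 0 ))          # invert: a nonzero status means the test passed
}

NOT false && echo "expected failure observed"
```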
00:38:03.594 request: 00:38:03.594 { 00:38:03.594 "name": "nvme0", 00:38:03.594 "trtype": "tcp", 00:38:03.594 "traddr": "127.0.0.1", 00:38:03.594 "adrfam": "ipv4", 00:38:03.594 "trsvcid": "4420", 00:38:03.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.594 "prchk_reftag": false, 00:38:03.594 "prchk_guard": false, 00:38:03.594 "hdgst": false, 00:38:03.594 "ddgst": false, 00:38:03.594 "psk": ":spdk-test:key1", 00:38:03.594 "allow_unrecognized_csi": false, 00:38:03.594 "method": "bdev_nvme_attach_controller", 00:38:03.594 "req_id": 1 00:38:03.594 } 00:38:03.594 Got JSON-RPC error response 00:38:03.594 response: 00:38:03.594 { 00:38:03.594 "code": -5, 00:38:03.594 "message": "Input/output error" 00:38:03.594 } 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@33 -- # sn=1073387717 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1073387717 00:38:03.594 1 links removed 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 
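The cleanup loop above resolves each :spdk-test:keyN name to its session-keyring serial number with `keyctl search @s user` and then unlinks that serial. A sketch of the same pattern, with function names reconstructed from the keyring/linux.sh markers in the log; keyctl is mocked here so the sketch runs without the keyutils package installed, and the serial echoed by the mock is the one shown in the log.

```shell
# Mocked keyctl so this runs without keyutils; the real tool queries the
# kernel session keyring (@s). The serial below matches the log above.
keyctl() {
    case $1 in
        search) echo 1073387717 ;;          # resolve key name -> serial
        unlink) echo "1 links removed" ;;   # drop the keyring link
    esac
}

get_keysn() { keyctl search @s user "$1"; }   # name -> serial number

unlink_key() {
    local name=$1 sn
    sn=$(get_keysn ":spdk-test:$name") || return 1
    keyctl unlink "$sn"
}

unlink_key key0   # prints "1 links removed", as in the log
```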
00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@33 -- # sn=424885977 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 424885977 00:38:03.594 1 links removed 00:38:03.594 14:02:26 keyring_linux -- keyring/linux.sh@41 -- # killprocess 975375 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 975375 ']' 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 975375 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 975375 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 975375' 00:38:03.594 killing process with pid 975375 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@971 -- # kill 975375 00:38:03.594 Received shutdown signal, test time was about 1.000000 seconds 00:38:03.594 00:38:03.594 Latency(us) 00:38:03.594 [2024-11-06T13:02:26.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.594 [2024-11-06T13:02:26.970Z] =================================================================================================================== 00:38:03.594 [2024-11-06T13:02:26.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:03.594 14:02:26 keyring_linux -- common/autotest_common.sh@976 -- # wait 
975375 00:38:03.855 14:02:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 975312 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 975312 ']' 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 975312 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 975312 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 975312' 00:38:03.855 killing process with pid 975312 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@971 -- # kill 975312 00:38:03.855 14:02:27 keyring_linux -- common/autotest_common.sh@976 -- # wait 975312 00:38:04.116 00:38:04.116 real 0m4.695s 00:38:04.116 user 0m9.063s 00:38:04.116 sys 0m1.323s 00:38:04.116 14:02:27 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:04.116 14:02:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:04.116 ************************************ 00:38:04.116 END TEST keyring_linux 00:38:04.116 ************************************ 00:38:04.116 14:02:27 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:04.116 14:02:27 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:04.116 14:02:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:04.116 14:02:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:04.116 14:02:27 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:04.116 14:02:27 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:04.116 14:02:27 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:04.116 14:02:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:04.116 14:02:27 -- common/autotest_common.sh@10 -- # set +x 00:38:04.116 14:02:27 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:04.116 14:02:27 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:38:04.116 14:02:27 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:38:04.116 14:02:27 -- common/autotest_common.sh@10 -- # set +x 00:38:12.248 INFO: APP EXITING 00:38:12.248 INFO: killing all VMs 00:38:12.248 INFO: killing vhost app 00:38:12.248 WARN: no vhost pid file found 00:38:12.248 INFO: EXIT DONE 00:38:15.573 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:15.573 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.7 (8086 0b00): 
Already using the ioatdma driver 00:38:15.573 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:15.573 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:18.867 Cleaning 00:38:18.867 Removing: /var/run/dpdk/spdk0/config 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:18.867 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:18.867 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:18.867 Removing: /var/run/dpdk/spdk1/config 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:18.867 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:18.867 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:18.867 Removing: /var/run/dpdk/spdk2/config 00:38:18.867 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:18.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:18.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:19.126 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:19.126 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:19.126 Removing: /var/run/dpdk/spdk3/config 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:19.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:19.127 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:19.127 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:19.127 Removing: /var/run/dpdk/spdk4/config 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:19.127 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:38:19.127 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:19.127 Removing: /dev/shm/bdev_svc_trace.1 00:38:19.127 Removing: /dev/shm/nvmf_trace.0 00:38:19.127 Removing: /dev/shm/spdk_tgt_trace.pid398339 00:38:19.127 Removing: /var/run/dpdk/spdk0 00:38:19.127 Removing: /var/run/dpdk/spdk1 00:38:19.127 Removing: /var/run/dpdk/spdk2 00:38:19.127 Removing: /var/run/dpdk/spdk3 00:38:19.127 Removing: /var/run/dpdk/spdk4 00:38:19.127 Removing: /var/run/dpdk/spdk_pid396852 00:38:19.127 Removing: /var/run/dpdk/spdk_pid398339 00:38:19.127 Removing: /var/run/dpdk/spdk_pid399193 00:38:19.127 Removing: /var/run/dpdk/spdk_pid400229 00:38:19.127 Removing: /var/run/dpdk/spdk_pid400568 00:38:19.127 Removing: /var/run/dpdk/spdk_pid401641 00:38:19.127 Removing: /var/run/dpdk/spdk_pid401895 00:38:19.127 Removing: /var/run/dpdk/spdk_pid402116 00:38:19.127 Removing: /var/run/dpdk/spdk_pid403246 00:38:19.127 Removing: /var/run/dpdk/spdk_pid404037 00:38:19.127 Removing: /var/run/dpdk/spdk_pid404425 00:38:19.127 Removing: /var/run/dpdk/spdk_pid404826 00:38:19.127 Removing: /var/run/dpdk/spdk_pid405229 00:38:19.127 Removing: /var/run/dpdk/spdk_pid405547 00:38:19.127 Removing: /var/run/dpdk/spdk_pid405697 00:38:19.127 Removing: /var/run/dpdk/spdk_pid406030 00:38:19.127 Removing: /var/run/dpdk/spdk_pid406420 00:38:19.127 Removing: /var/run/dpdk/spdk_pid407484 00:38:19.127 Removing: /var/run/dpdk/spdk_pid410749 00:38:19.127 Removing: /var/run/dpdk/spdk_pid411122 00:38:19.127 Removing: /var/run/dpdk/spdk_pid411479 00:38:19.127 Removing: /var/run/dpdk/spdk_pid411811 00:38:19.127 Removing: /var/run/dpdk/spdk_pid412185 00:38:19.127 Removing: /var/run/dpdk/spdk_pid412243 00:38:19.387 Removing: /var/run/dpdk/spdk_pid412896 00:38:19.387 Removing: /var/run/dpdk/spdk_pid412914 00:38:19.387 Removing: /var/run/dpdk/spdk_pid413275 00:38:19.387 Removing: /var/run/dpdk/spdk_pid413507 00:38:19.387 Removing: /var/run/dpdk/spdk_pid413649 00:38:19.387 Removing: /var/run/dpdk/spdk_pid413974 00:38:19.387 
Removing: /var/run/dpdk/spdk_pid414431 00:38:19.387 Removing: /var/run/dpdk/spdk_pid414787 00:38:19.387 Removing: /var/run/dpdk/spdk_pid415092 00:38:19.387 Removing: /var/run/dpdk/spdk_pid419729 00:38:19.387 Removing: /var/run/dpdk/spdk_pid425023 00:38:19.387 Removing: /var/run/dpdk/spdk_pid437716 00:38:19.387 Removing: /var/run/dpdk/spdk_pid438399 00:38:19.387 Removing: /var/run/dpdk/spdk_pid443549 00:38:19.387 Removing: /var/run/dpdk/spdk_pid444031 00:38:19.387 Removing: /var/run/dpdk/spdk_pid449215 00:38:19.387 Removing: /var/run/dpdk/spdk_pid456302 00:38:19.387 Removing: /var/run/dpdk/spdk_pid459408 00:38:19.387 Removing: /var/run/dpdk/spdk_pid471955 00:38:19.387 Removing: /var/run/dpdk/spdk_pid483099 00:38:19.387 Removing: /var/run/dpdk/spdk_pid485578 00:38:19.387 Removing: /var/run/dpdk/spdk_pid486595 00:38:19.387 Removing: /var/run/dpdk/spdk_pid507596 00:38:19.387 Removing: /var/run/dpdk/spdk_pid512355 00:38:19.387 Removing: /var/run/dpdk/spdk_pid569204 00:38:19.387 Removing: /var/run/dpdk/spdk_pid575582 00:38:19.387 Removing: /var/run/dpdk/spdk_pid582673 00:38:19.387 Removing: /var/run/dpdk/spdk_pid590454 00:38:19.387 Removing: /var/run/dpdk/spdk_pid590457 00:38:19.387 Removing: /var/run/dpdk/spdk_pid591744 00:38:19.387 Removing: /var/run/dpdk/spdk_pid592922 00:38:19.387 Removing: /var/run/dpdk/spdk_pid593944 00:38:19.387 Removing: /var/run/dpdk/spdk_pid594606 00:38:19.387 Removing: /var/run/dpdk/spdk_pid594735 00:38:19.387 Removing: /var/run/dpdk/spdk_pid594961 00:38:19.387 Removing: /var/run/dpdk/spdk_pid595225 00:38:19.387 Removing: /var/run/dpdk/spdk_pid595274 00:38:19.387 Removing: /var/run/dpdk/spdk_pid596279 00:38:19.387 Removing: /var/run/dpdk/spdk_pid597284 00:38:19.387 Removing: /var/run/dpdk/spdk_pid598290 00:38:19.387 Removing: /var/run/dpdk/spdk_pid598962 00:38:19.387 Removing: /var/run/dpdk/spdk_pid598965 00:38:19.387 Removing: /var/run/dpdk/spdk_pid599303 00:38:19.387 Removing: /var/run/dpdk/spdk_pid600759 00:38:19.387 Removing: 
/var/run/dpdk/spdk_pid602165 00:38:19.387 Removing: /var/run/dpdk/spdk_pid612151 00:38:19.387 Removing: /var/run/dpdk/spdk_pid648321 00:38:19.387 Removing: /var/run/dpdk/spdk_pid653776 00:38:19.387 Removing: /var/run/dpdk/spdk_pid655724 00:38:19.387 Removing: /var/run/dpdk/spdk_pid657940 00:38:19.387 Removing: /var/run/dpdk/spdk_pid658079 00:38:19.387 Removing: /var/run/dpdk/spdk_pid658203 00:38:19.387 Removing: /var/run/dpdk/spdk_pid658429 00:38:19.387 Removing: /var/run/dpdk/spdk_pid659040 00:38:19.387 Removing: /var/run/dpdk/spdk_pid661154 00:38:19.387 Removing: /var/run/dpdk/spdk_pid662236 00:38:19.387 Removing: /var/run/dpdk/spdk_pid662616 00:38:19.387 Removing: /var/run/dpdk/spdk_pid665325 00:38:19.387 Removing: /var/run/dpdk/spdk_pid666046 00:38:19.387 Removing: /var/run/dpdk/spdk_pid666834 00:38:19.387 Removing: /var/run/dpdk/spdk_pid671924 00:38:19.387 Removing: /var/run/dpdk/spdk_pid679075 00:38:19.646 Removing: /var/run/dpdk/spdk_pid679076 00:38:19.646 Removing: /var/run/dpdk/spdk_pid679077 00:38:19.646 Removing: /var/run/dpdk/spdk_pid683770 00:38:19.646 Removing: /var/run/dpdk/spdk_pid694011 00:38:19.646 Removing: /var/run/dpdk/spdk_pid698835 00:38:19.646 Removing: /var/run/dpdk/spdk_pid706067 00:38:19.646 Removing: /var/run/dpdk/spdk_pid707618 00:38:19.646 Removing: /var/run/dpdk/spdk_pid709422 00:38:19.646 Removing: /var/run/dpdk/spdk_pid711104 00:38:19.646 Removing: /var/run/dpdk/spdk_pid716643 00:38:19.646 Removing: /var/run/dpdk/spdk_pid722095 00:38:19.646 Removing: /var/run/dpdk/spdk_pid727117 00:38:19.646 Removing: /var/run/dpdk/spdk_pid736705 00:38:19.646 Removing: /var/run/dpdk/spdk_pid736798 00:38:19.646 Removing: /var/run/dpdk/spdk_pid741852 00:38:19.646 Removing: /var/run/dpdk/spdk_pid742182 00:38:19.646 Removing: /var/run/dpdk/spdk_pid742337 00:38:19.646 Removing: /var/run/dpdk/spdk_pid742854 00:38:19.646 Removing: /var/run/dpdk/spdk_pid742865 00:38:19.646 Removing: /var/run/dpdk/spdk_pid748557 00:38:19.646 Removing: 
/var/run/dpdk/spdk_pid749093 00:38:19.646 Removing: /var/run/dpdk/spdk_pid754563 00:38:19.646 Removing: /var/run/dpdk/spdk_pid757704 00:38:19.646 Removing: /var/run/dpdk/spdk_pid764299 00:38:19.646 Removing: /var/run/dpdk/spdk_pid770841 00:38:19.646 Removing: /var/run/dpdk/spdk_pid780787 00:38:19.646 Removing: /var/run/dpdk/spdk_pid789684 00:38:19.646 Removing: /var/run/dpdk/spdk_pid789719 00:38:19.646 Removing: /var/run/dpdk/spdk_pid813177 00:38:19.646 Removing: /var/run/dpdk/spdk_pid813868 00:38:19.646 Removing: /var/run/dpdk/spdk_pid814560 00:38:19.646 Removing: /var/run/dpdk/spdk_pid815351 00:38:19.646 Removing: /var/run/dpdk/spdk_pid816315 00:38:19.646 Removing: /var/run/dpdk/spdk_pid817102 00:38:19.646 Removing: /var/run/dpdk/spdk_pid817893 00:38:19.646 Removing: /var/run/dpdk/spdk_pid818665 00:38:19.646 Removing: /var/run/dpdk/spdk_pid823726 00:38:19.646 Removing: /var/run/dpdk/spdk_pid824059 00:38:19.646 Removing: /var/run/dpdk/spdk_pid831109 00:38:19.646 Removing: /var/run/dpdk/spdk_pid831482 00:38:19.646 Removing: /var/run/dpdk/spdk_pid838401 00:38:19.646 Removing: /var/run/dpdk/spdk_pid843537 00:38:19.646 Removing: /var/run/dpdk/spdk_pid855069 00:38:19.646 Removing: /var/run/dpdk/spdk_pid855803 00:38:19.646 Removing: /var/run/dpdk/spdk_pid860798 00:38:19.646 Removing: /var/run/dpdk/spdk_pid861201 00:38:19.646 Removing: /var/run/dpdk/spdk_pid866089 00:38:19.646 Removing: /var/run/dpdk/spdk_pid873052 00:38:19.646 Removing: /var/run/dpdk/spdk_pid876100 00:38:19.646 Removing: /var/run/dpdk/spdk_pid888615 00:38:19.646 Removing: /var/run/dpdk/spdk_pid899226 00:38:19.646 Removing: /var/run/dpdk/spdk_pid901234 00:38:19.646 Removing: /var/run/dpdk/spdk_pid902241 00:38:19.646 Removing: /var/run/dpdk/spdk_pid921552 00:38:19.646 Removing: /var/run/dpdk/spdk_pid926233 00:38:19.646 Removing: /var/run/dpdk/spdk_pid929472 00:38:19.646 Removing: /var/run/dpdk/spdk_pid937072 00:38:19.647 Removing: /var/run/dpdk/spdk_pid937183 00:38:19.647 Removing: 
/var/run/dpdk/spdk_pid943346 00:38:19.647 Removing: /var/run/dpdk/spdk_pid945748 00:38:19.647 Removing: /var/run/dpdk/spdk_pid948006 00:38:19.647 Removing: /var/run/dpdk/spdk_pid949450 00:38:19.647 Removing: /var/run/dpdk/spdk_pid951721 00:38:19.906 Removing: /var/run/dpdk/spdk_pid953236 00:38:19.906 Removing: /var/run/dpdk/spdk_pid962973 00:38:19.906 Removing: /var/run/dpdk/spdk_pid963550 00:38:19.906 Removing: /var/run/dpdk/spdk_pid964213 00:38:19.906 Removing: /var/run/dpdk/spdk_pid967158 00:38:19.906 Removing: /var/run/dpdk/spdk_pid967759 00:38:19.906 Removing: /var/run/dpdk/spdk_pid968208 00:38:19.906 Removing: /var/run/dpdk/spdk_pid972963 00:38:19.906 Removing: /var/run/dpdk/spdk_pid973063 00:38:19.906 Removing: /var/run/dpdk/spdk_pid974876 00:38:19.906 Removing: /var/run/dpdk/spdk_pid975312 00:38:19.906 Removing: /var/run/dpdk/spdk_pid975375 00:38:19.906 Clean 00:38:19.906 14:02:43 -- common/autotest_common.sh@1451 -- # return 0 00:38:19.906 14:02:43 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:19.906 14:02:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:19.906 14:02:43 -- common/autotest_common.sh@10 -- # set +x 00:38:19.906 14:02:43 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:19.906 14:02:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:19.906 14:02:43 -- common/autotest_common.sh@10 -- # set +x 00:38:19.906 14:02:43 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:19.906 14:02:43 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:19.906 14:02:43 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:19.906 14:02:43 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:19.906 14:02:43 -- spdk/autotest.sh@394 -- # hostname 00:38:19.906 14:02:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:20.166 geninfo: WARNING: invalid characters removed from testname! 00:38:46.747 14:03:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:48.661 14:03:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:50.573 14:03:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:51.956 14:03:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:53.868 14:03:16 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:55.253 14:03:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:56.635 14:03:19 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:56.635 14:03:20 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:56.635 14:03:20 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:56.635 14:03:20 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:56.635 14:03:20 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:56.635 14:03:20 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:56.896 + [[ -n 311815 ]] 00:38:56.896 + sudo kill 311815 00:38:56.907 [Pipeline] } 00:38:56.922 [Pipeline] // stage 00:38:56.926 
[Pipeline] } 00:38:56.940 [Pipeline] // timeout 00:38:56.945 [Pipeline] } 00:38:56.959 [Pipeline] // catchError 00:38:56.964 [Pipeline] } 00:38:56.980 [Pipeline] // wrap 00:38:56.986 [Pipeline] } 00:38:56.999 [Pipeline] // catchError 00:38:57.008 [Pipeline] stage 00:38:57.010 [Pipeline] { (Epilogue) 00:38:57.023 [Pipeline] catchError 00:38:57.025 [Pipeline] { 00:38:57.037 [Pipeline] echo 00:38:57.039 Cleanup processes 00:38:57.044 [Pipeline] sh 00:38:57.335 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:57.335 988852 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:57.350 [Pipeline] sh 00:38:57.639 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:57.639 ++ grep -v 'sudo pgrep' 00:38:57.639 ++ awk '{print $1}' 00:38:57.639 + sudo kill -9 00:38:57.639 + true 00:38:57.652 [Pipeline] sh 00:38:57.942 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:10.191 [Pipeline] sh 00:39:10.480 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:10.480 Artifacts sizes are good 00:39:10.496 [Pipeline] archiveArtifacts 00:39:10.504 Archiving artifacts 00:39:10.676 [Pipeline] sh 00:39:11.053 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:11.069 [Pipeline] cleanWs 00:39:11.080 [WS-CLEANUP] Deleting project workspace... 00:39:11.080 [WS-CLEANUP] Deferred wipeout is used... 00:39:11.088 [WS-CLEANUP] done 00:39:11.090 [Pipeline] } 00:39:11.107 [Pipeline] // catchError 00:39:11.120 [Pipeline] sh 00:39:11.409 + logger -p user.info -t JENKINS-CI 00:39:11.419 [Pipeline] } 00:39:11.429 [Pipeline] // stage 00:39:11.435 [Pipeline] } 00:39:11.448 [Pipeline] // node 00:39:11.454 [Pipeline] End of Pipeline 00:39:11.485 Finished: SUCCESS
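The epilogue's cleanup step above lists processes matching the workspace path with `pgrep -af`, filters the pgrep invocation itself out of that list, and `kill -9`s whatever remains; the trailing `+ true` keeps the stage green when nothing matched. A sketch of that filter against a canned process list, so it runs anywhere; in the log the only match was the pgrep command itself (pid 988852), which is the case reproduced here.

```shell
# Canned pgrep -af output: the only match is the pgrep command itself,
# exactly the situation in the "Cleanup processes" step above.
list='988852 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk'

# Drop the pgrep line, keep only the pid column; empty means nothing to kill.
pids=$(printf '%s\n' "$list" | grep -v 'sudo pgrep' | awk '{print $1}')

# kill -9 with no pids is a usage error; "|| true" swallows it, matching the
# "+ sudo kill -9" followed by "+ true" lines in the pipeline log.
kill -9 $pids 2>/dev/null || true
echo "pids to kill: [$pids]"   # prints "pids to kill: []"
```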